Test Report: KVM_Linux_crio 19910

0805a48cef53763875eefc0e18e5d59dcaccd8a0:2024-11-05:36955

Failed tests (32/314)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 151.89
38 TestAddons/parallel/MetricsServer 331.88
47 TestAddons/StoppedEnableDisable 154.27
166 TestMultiControlPlane/serial/StopSecondaryNode 141.48
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.64
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.41
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.4
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 377.54
173 TestMultiControlPlane/serial/StopCluster 141.91
233 TestMultiNode/serial/RestartKeepsNodes 325.86
235 TestMultiNode/serial/StopMultiNode 145.16
242 TestPreload 194.34
250 TestKubernetesUpgrade 425.88
287 TestPause/serial/SecondStartNoReconfiguration 90.88
316 TestStartStop/group/old-k8s-version/serial/FirstStart 316.8
342 TestStartStop/group/no-preload/serial/Stop 139.14
344 TestStartStop/group/embed-certs/serial/Stop 139.03
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.94
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
349 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
351 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 114.04
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
358 TestStartStop/group/old-k8s-version/serial/SecondStart 704.31
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.06
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.25
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.13
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.36
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 489.83
364 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 425.38
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 314.79
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 117.96
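
To reproduce one of these failures outside CI, the usual approach is to re-run just that test against the same driver and container runtime used by this job (kvm2 with crio, binary out/minikube-linux-amd64). A minimal sketch, assuming the standard go test entry point under test/integration in the minikube repository; the --minikube-start-args flag name is an assumption and may differ from the exact CI invocation:

    # build out/minikube-linux-amd64 first (e.g. with make), then:
    go test ./test/integration -v -timeout 60m \
      -run "TestAddons/parallel/Ingress" \
      -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"
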
TestAddons/parallel/Ingress (151.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-320753 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-320753 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-320753 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b4adce59-2101-44a5-bcc1-53c27718456c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b4adce59-2101-44a5-bcc1-53c27718456c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004408087s
I1105 17:45:17.175735   15492 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-320753 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.676256545s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
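
Status 28 from the remote process is curl's timeout code (CURLE_OPERATION_TIMEDOUT), so the request to the ingress on 127.0.0.1 inside the VM timed out rather than being refused. A hedged way to probe this by hand is to repeat the failing step with verbose output and an explicit short timeout (the --max-time value below is arbitrary, chosen only for illustration):

    out/minikube-linux-amd64 -p addons-320753 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
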
addons_test.go:286: (dbg) Run:  kubectl --context addons-320753 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-320753 -n addons-320753
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 logs -n 25: (1.207960111s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| delete  | -p download-only-083264                                                                     | download-only-083264 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| delete  | -p download-only-753477                                                                     | download-only-753477 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| delete  | -p download-only-083264                                                                     | download-only-083264 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-133090 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | binary-mirror-133090                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38161                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-133090                                                                     | binary-mirror-133090 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| addons  | disable dashboard -p                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | addons-320753                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | addons-320753                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-320753 --wait=true                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | -p addons-320753                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-320753 ip                                                                            | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-320753 ssh cat                                                                       | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | /opt/local-path-provisioner/pvc-dc83c679-ddcc-4681-bf85-ba96348fe5e0_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:45 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:45 UTC | 05 Nov 24 17:45 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-320753 ssh curl -s                                                                   | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:45 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:45 UTC | 05 Nov 24 17:45 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:45 UTC | 05 Nov 24 17:45 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-320753 ip                                                                            | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:47 UTC | 05 Nov 24 17:47 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:41:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:41:54.631172   16242 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:41:54.631269   16242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:54.631276   16242 out.go:358] Setting ErrFile to fd 2...
	I1105 17:41:54.631280   16242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:54.631441   16242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 17:41:54.632028   16242 out.go:352] Setting JSON to false
	I1105 17:41:54.632921   16242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1457,"bootTime":1730827058,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 17:41:54.632977   16242 start.go:139] virtualization: kvm guest
	I1105 17:41:54.634993   16242 out.go:177] * [addons-320753] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 17:41:54.636266   16242 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 17:41:54.636281   16242 notify.go:220] Checking for updates...
	I1105 17:41:54.638838   16242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:41:54.640171   16242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 17:41:54.641374   16242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 17:41:54.642502   16242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 17:41:54.643629   16242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 17:41:54.644809   16242 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:41:54.675700   16242 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 17:41:54.677002   16242 start.go:297] selected driver: kvm2
	I1105 17:41:54.677018   16242 start.go:901] validating driver "kvm2" against <nil>
	I1105 17:41:54.677034   16242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 17:41:54.677732   16242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:54.677818   16242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 17:41:54.692490   16242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 17:41:54.692552   16242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:41:54.692803   16242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:41:54.692836   16242 cni.go:84] Creating CNI manager for ""
	I1105 17:41:54.692874   16242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 17:41:54.692882   16242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 17:41:54.692933   16242 start.go:340] cluster config:
	{Name:addons-320753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:41:54.693018   16242 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:54.695468   16242 out.go:177] * Starting "addons-320753" primary control-plane node in "addons-320753" cluster
	I1105 17:41:54.696549   16242 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:41:54.696582   16242 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 17:41:54.696590   16242 cache.go:56] Caching tarball of preloaded images
	I1105 17:41:54.696667   16242 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 17:41:54.696680   16242 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 17:41:54.696963   16242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/config.json ...
	I1105 17:41:54.696983   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/config.json: {Name:mk664197e3260b062aa2572735b9e61ad88cd4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:54.697122   16242 start.go:360] acquireMachinesLock for addons-320753: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 17:41:54.697163   16242 start.go:364] duration metric: took 29.509µs to acquireMachinesLock for "addons-320753"
	I1105 17:41:54.697179   16242 start.go:93] Provisioning new machine with config: &{Name:addons-320753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:41:54.697225   16242 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 17:41:54.698582   16242 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1105 17:41:54.698706   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:41:54.698739   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:41:54.712411   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I1105 17:41:54.712850   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:41:54.713383   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:41:54.713402   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:41:54.713669   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:41:54.713856   16242 main.go:141] libmachine: (addons-320753) Calling .GetMachineName
	I1105 17:41:54.713963   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:41:54.714103   16242 start.go:159] libmachine.API.Create for "addons-320753" (driver="kvm2")
	I1105 17:41:54.714132   16242 client.go:168] LocalClient.Create starting
	I1105 17:41:54.714164   16242 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 17:41:55.005541   16242 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 17:41:55.130181   16242 main.go:141] libmachine: Running pre-create checks...
	I1105 17:41:55.130206   16242 main.go:141] libmachine: (addons-320753) Calling .PreCreateCheck
	I1105 17:41:55.130699   16242 main.go:141] libmachine: (addons-320753) Calling .GetConfigRaw
	I1105 17:41:55.131063   16242 main.go:141] libmachine: Creating machine...
	I1105 17:41:55.131074   16242 main.go:141] libmachine: (addons-320753) Calling .Create
	I1105 17:41:55.131202   16242 main.go:141] libmachine: (addons-320753) Creating KVM machine...
	I1105 17:41:55.132407   16242 main.go:141] libmachine: (addons-320753) DBG | found existing default KVM network
	I1105 17:41:55.133134   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.132998   16264 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I1105 17:41:55.133149   16242 main.go:141] libmachine: (addons-320753) DBG | created network xml: 
	I1105 17:41:55.133162   16242 main.go:141] libmachine: (addons-320753) DBG | <network>
	I1105 17:41:55.133172   16242 main.go:141] libmachine: (addons-320753) DBG |   <name>mk-addons-320753</name>
	I1105 17:41:55.133185   16242 main.go:141] libmachine: (addons-320753) DBG |   <dns enable='no'/>
	I1105 17:41:55.133193   16242 main.go:141] libmachine: (addons-320753) DBG |   
	I1105 17:41:55.133205   16242 main.go:141] libmachine: (addons-320753) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1105 17:41:55.133217   16242 main.go:141] libmachine: (addons-320753) DBG |     <dhcp>
	I1105 17:41:55.133231   16242 main.go:141] libmachine: (addons-320753) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1105 17:41:55.133248   16242 main.go:141] libmachine: (addons-320753) DBG |     </dhcp>
	I1105 17:41:55.133261   16242 main.go:141] libmachine: (addons-320753) DBG |   </ip>
	I1105 17:41:55.133275   16242 main.go:141] libmachine: (addons-320753) DBG |   
	I1105 17:41:55.133287   16242 main.go:141] libmachine: (addons-320753) DBG | </network>
	I1105 17:41:55.133297   16242 main.go:141] libmachine: (addons-320753) DBG | 
	I1105 17:41:55.138540   16242 main.go:141] libmachine: (addons-320753) DBG | trying to create private KVM network mk-addons-320753 192.168.39.0/24...
	I1105 17:41:55.199087   16242 main.go:141] libmachine: (addons-320753) DBG | private KVM network mk-addons-320753 192.168.39.0/24 created
	I1105 17:41:55.199118   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.199066   16264 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 17:41:55.199151   16242 main.go:141] libmachine: (addons-320753) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753 ...
	I1105 17:41:55.199173   16242 main.go:141] libmachine: (addons-320753) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 17:41:55.199232   16242 main.go:141] libmachine: (addons-320753) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 17:41:55.475013   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.474849   16264 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa...
	I1105 17:41:55.517210   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.517077   16264 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/addons-320753.rawdisk...
	I1105 17:41:55.517241   16242 main.go:141] libmachine: (addons-320753) DBG | Writing magic tar header
	I1105 17:41:55.517254   16242 main.go:141] libmachine: (addons-320753) DBG | Writing SSH key tar header
	I1105 17:41:55.517270   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.517201   16264 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753 ...
	I1105 17:41:55.517305   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753
	I1105 17:41:55.517358   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753 (perms=drwx------)
	I1105 17:41:55.517387   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 17:41:55.517400   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 17:41:55.517418   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 17:41:55.517430   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 17:41:55.517442   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 17:41:55.517483   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 17:41:55.517511   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 17:41:55.517521   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 17:41:55.517531   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 17:41:55.517539   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins
	I1105 17:41:55.517544   16242 main.go:141] libmachine: (addons-320753) Creating domain...
	I1105 17:41:55.517557   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home
	I1105 17:41:55.517581   16242 main.go:141] libmachine: (addons-320753) DBG | Skipping /home - not owner
	I1105 17:41:55.518489   16242 main.go:141] libmachine: (addons-320753) define libvirt domain using xml: 
	I1105 17:41:55.518514   16242 main.go:141] libmachine: (addons-320753) <domain type='kvm'>
	I1105 17:41:55.518522   16242 main.go:141] libmachine: (addons-320753)   <name>addons-320753</name>
	I1105 17:41:55.518534   16242 main.go:141] libmachine: (addons-320753)   <memory unit='MiB'>4000</memory>
	I1105 17:41:55.518543   16242 main.go:141] libmachine: (addons-320753)   <vcpu>2</vcpu>
	I1105 17:41:55.518550   16242 main.go:141] libmachine: (addons-320753)   <features>
	I1105 17:41:55.518560   16242 main.go:141] libmachine: (addons-320753)     <acpi/>
	I1105 17:41:55.518569   16242 main.go:141] libmachine: (addons-320753)     <apic/>
	I1105 17:41:55.518576   16242 main.go:141] libmachine: (addons-320753)     <pae/>
	I1105 17:41:55.518583   16242 main.go:141] libmachine: (addons-320753)     
	I1105 17:41:55.518593   16242 main.go:141] libmachine: (addons-320753)   </features>
	I1105 17:41:55.518604   16242 main.go:141] libmachine: (addons-320753)   <cpu mode='host-passthrough'>
	I1105 17:41:55.518618   16242 main.go:141] libmachine: (addons-320753)   
	I1105 17:41:55.518634   16242 main.go:141] libmachine: (addons-320753)   </cpu>
	I1105 17:41:55.518639   16242 main.go:141] libmachine: (addons-320753)   <os>
	I1105 17:41:55.518647   16242 main.go:141] libmachine: (addons-320753)     <type>hvm</type>
	I1105 17:41:55.518652   16242 main.go:141] libmachine: (addons-320753)     <boot dev='cdrom'/>
	I1105 17:41:55.518658   16242 main.go:141] libmachine: (addons-320753)     <boot dev='hd'/>
	I1105 17:41:55.518663   16242 main.go:141] libmachine: (addons-320753)     <bootmenu enable='no'/>
	I1105 17:41:55.518669   16242 main.go:141] libmachine: (addons-320753)   </os>
	I1105 17:41:55.518675   16242 main.go:141] libmachine: (addons-320753)   <devices>
	I1105 17:41:55.518680   16242 main.go:141] libmachine: (addons-320753)     <disk type='file' device='cdrom'>
	I1105 17:41:55.518693   16242 main.go:141] libmachine: (addons-320753)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/boot2docker.iso'/>
	I1105 17:41:55.518707   16242 main.go:141] libmachine: (addons-320753)       <target dev='hdc' bus='scsi'/>
	I1105 17:41:55.518712   16242 main.go:141] libmachine: (addons-320753)       <readonly/>
	I1105 17:41:55.518716   16242 main.go:141] libmachine: (addons-320753)     </disk>
	I1105 17:41:55.518721   16242 main.go:141] libmachine: (addons-320753)     <disk type='file' device='disk'>
	I1105 17:41:55.518729   16242 main.go:141] libmachine: (addons-320753)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 17:41:55.518737   16242 main.go:141] libmachine: (addons-320753)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/addons-320753.rawdisk'/>
	I1105 17:41:55.518743   16242 main.go:141] libmachine: (addons-320753)       <target dev='hda' bus='virtio'/>
	I1105 17:41:55.518748   16242 main.go:141] libmachine: (addons-320753)     </disk>
	I1105 17:41:55.518755   16242 main.go:141] libmachine: (addons-320753)     <interface type='network'>
	I1105 17:41:55.518761   16242 main.go:141] libmachine: (addons-320753)       <source network='mk-addons-320753'/>
	I1105 17:41:55.518767   16242 main.go:141] libmachine: (addons-320753)       <model type='virtio'/>
	I1105 17:41:55.518772   16242 main.go:141] libmachine: (addons-320753)     </interface>
	I1105 17:41:55.518782   16242 main.go:141] libmachine: (addons-320753)     <interface type='network'>
	I1105 17:41:55.518789   16242 main.go:141] libmachine: (addons-320753)       <source network='default'/>
	I1105 17:41:55.518803   16242 main.go:141] libmachine: (addons-320753)       <model type='virtio'/>
	I1105 17:41:55.518811   16242 main.go:141] libmachine: (addons-320753)     </interface>
	I1105 17:41:55.518815   16242 main.go:141] libmachine: (addons-320753)     <serial type='pty'>
	I1105 17:41:55.518821   16242 main.go:141] libmachine: (addons-320753)       <target port='0'/>
	I1105 17:41:55.518825   16242 main.go:141] libmachine: (addons-320753)     </serial>
	I1105 17:41:55.518831   16242 main.go:141] libmachine: (addons-320753)     <console type='pty'>
	I1105 17:41:55.518838   16242 main.go:141] libmachine: (addons-320753)       <target type='serial' port='0'/>
	I1105 17:41:55.518845   16242 main.go:141] libmachine: (addons-320753)     </console>
	I1105 17:41:55.518849   16242 main.go:141] libmachine: (addons-320753)     <rng model='virtio'>
	I1105 17:41:55.518856   16242 main.go:141] libmachine: (addons-320753)       <backend model='random'>/dev/random</backend>
	I1105 17:41:55.518859   16242 main.go:141] libmachine: (addons-320753)     </rng>
	I1105 17:41:55.518864   16242 main.go:141] libmachine: (addons-320753)     
	I1105 17:41:55.518870   16242 main.go:141] libmachine: (addons-320753)     
	I1105 17:41:55.518874   16242 main.go:141] libmachine: (addons-320753)   </devices>
	I1105 17:41:55.518879   16242 main.go:141] libmachine: (addons-320753) </domain>
	I1105 17:41:55.518885   16242 main.go:141] libmachine: (addons-320753) 
	I1105 17:41:55.525389   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:3f:fa:0b in network default
	I1105 17:41:55.525873   16242 main.go:141] libmachine: (addons-320753) Ensuring networks are active...
	I1105 17:41:55.525897   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:55.526434   16242 main.go:141] libmachine: (addons-320753) Ensuring network default is active
	I1105 17:41:55.526696   16242 main.go:141] libmachine: (addons-320753) Ensuring network mk-addons-320753 is active
	I1105 17:41:55.528065   16242 main.go:141] libmachine: (addons-320753) Getting domain xml...
	I1105 17:41:55.528663   16242 main.go:141] libmachine: (addons-320753) Creating domain...
	I1105 17:41:56.922125   16242 main.go:141] libmachine: (addons-320753) Waiting to get IP...
	I1105 17:41:56.922874   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:56.923272   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:56.923299   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:56.923249   16264 retry.go:31] will retry after 268.68519ms: waiting for machine to come up
	I1105 17:41:57.193729   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:57.194218   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:57.194242   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:57.194161   16264 retry.go:31] will retry after 308.815288ms: waiting for machine to come up
	I1105 17:41:57.504533   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:57.505038   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:57.505061   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:57.504980   16264 retry.go:31] will retry after 340.827865ms: waiting for machine to come up
	I1105 17:41:57.847465   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:57.847965   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:57.847995   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:57.847928   16264 retry.go:31] will retry after 532.128569ms: waiting for machine to come up
	I1105 17:41:58.381449   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:58.381866   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:58.381894   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:58.381820   16264 retry.go:31] will retry after 550.436713ms: waiting for machine to come up
	I1105 17:41:58.933369   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:58.933706   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:58.933729   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:58.933662   16264 retry.go:31] will retry after 911.635128ms: waiting for machine to come up
	I1105 17:41:59.847254   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:59.847675   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:59.847703   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:59.847635   16264 retry.go:31] will retry after 971.876512ms: waiting for machine to come up
	I1105 17:42:00.821220   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:00.821644   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:00.821686   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:00.821610   16264 retry.go:31] will retry after 1.397416189s: waiting for machine to come up
	I1105 17:42:02.221022   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:02.221446   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:02.221473   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:02.221402   16264 retry.go:31] will retry after 1.160656426s: waiting for machine to come up
	I1105 17:42:03.383794   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:03.384209   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:03.384239   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:03.384165   16264 retry.go:31] will retry after 1.776821583s: waiting for machine to come up
	I1105 17:42:05.163003   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:05.163322   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:05.163348   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:05.163273   16264 retry.go:31] will retry after 2.125484758s: waiting for machine to come up
	I1105 17:42:07.290208   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:07.290579   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:07.290607   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:07.290526   16264 retry.go:31] will retry after 3.012964339s: waiting for machine to come up
	I1105 17:42:10.305078   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:10.305469   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:10.305490   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:10.305428   16264 retry.go:31] will retry after 2.81216672s: waiting for machine to come up
	I1105 17:42:13.121417   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:13.121817   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:13.121841   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:13.121780   16264 retry.go:31] will retry after 3.6760464s: waiting for machine to come up
	I1105 17:42:16.800415   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:16.800850   16242 main.go:141] libmachine: (addons-320753) Found IP for machine: 192.168.39.201
	I1105 17:42:16.800865   16242 main.go:141] libmachine: (addons-320753) Reserving static IP address...
	I1105 17:42:16.800874   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has current primary IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:16.801198   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find host DHCP lease matching {name: "addons-320753", mac: "52:54:00:89:64:28", ip: "192.168.39.201"} in network mk-addons-320753
	I1105 17:42:16.869887   16242 main.go:141] libmachine: (addons-320753) Reserved static IP address: 192.168.39.201
	I1105 17:42:16.869917   16242 main.go:141] libmachine: (addons-320753) DBG | Getting to WaitForSSH function...
	I1105 17:42:16.869941   16242 main.go:141] libmachine: (addons-320753) Waiting for SSH to be available...
	I1105 17:42:16.872367   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:16.872792   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:minikube Clientid:01:52:54:00:89:64:28}
	I1105 17:42:16.872821   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:16.872944   16242 main.go:141] libmachine: (addons-320753) DBG | Using SSH client type: external
	I1105 17:42:16.872967   16242 main.go:141] libmachine: (addons-320753) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa (-rw-------)
	I1105 17:42:16.873004   16242 main.go:141] libmachine: (addons-320753) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 17:42:16.873017   16242 main.go:141] libmachine: (addons-320753) DBG | About to run SSH command:
	I1105 17:42:16.873033   16242 main.go:141] libmachine: (addons-320753) DBG | exit 0
	I1105 17:42:17.002832   16242 main.go:141] libmachine: (addons-320753) DBG | SSH cmd err, output: <nil>: 
	I1105 17:42:17.003118   16242 main.go:141] libmachine: (addons-320753) KVM machine creation complete!
	I1105 17:42:17.003480   16242 main.go:141] libmachine: (addons-320753) Calling .GetConfigRaw
	I1105 17:42:17.004047   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:17.004390   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:17.004548   16242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 17:42:17.004562   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:17.005768   16242 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 17:42:17.005780   16242 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 17:42:17.005785   16242 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 17:42:17.005790   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.007934   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.008250   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.008276   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.008431   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.008590   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.008728   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.008862   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.009009   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.009213   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.009224   16242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 17:42:17.106022   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 17:42:17.106057   16242 main.go:141] libmachine: Detecting the provisioner...
	I1105 17:42:17.106067   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.108626   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.108912   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.108939   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.109140   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.109320   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.109438   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.109572   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.109703   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.109879   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.109889   16242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 17:42:17.207375   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 17:42:17.207430   16242 main.go:141] libmachine: found compatible host: buildroot
	I1105 17:42:17.207437   16242 main.go:141] libmachine: Provisioning with buildroot...
	I1105 17:42:17.207443   16242 main.go:141] libmachine: (addons-320753) Calling .GetMachineName
	I1105 17:42:17.207693   16242 buildroot.go:166] provisioning hostname "addons-320753"
	I1105 17:42:17.207721   16242 main.go:141] libmachine: (addons-320753) Calling .GetMachineName
	I1105 17:42:17.207908   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.210765   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.211327   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.211350   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.211454   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.211610   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.211745   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.212016   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.212201   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.212375   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.212393   16242 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-320753 && echo "addons-320753" | sudo tee /etc/hostname
	I1105 17:42:17.324149   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-320753
	
	I1105 17:42:17.324173   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.326714   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.327038   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.327065   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.327255   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.327436   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.327592   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.327739   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.327911   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.328084   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.328105   16242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-320753' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-320753/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-320753' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 17:42:17.430829   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
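(Illustration) The repeated "About to run SSH command" / "SSH cmd err, output" pairs above are the provisioner executing each setup step over SSH with the per-machine id_rsa key. Below is a minimal, hypothetical Go sketch of that pattern, not libmachine's actual implementation; the address, user, key path and hostname command are taken from the log, and golang.org/x/crypto/ssh is assumed to be available.

// sshrun.go: hypothetical sketch of running one provisioning command over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Mirrors the StrictHostKeyChecking=no option visible in the log.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.39.201:22", "docker",
		"/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa",
		`sudo hostname addons-320753 && echo "addons-320753" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}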
	I1105 17:42:17.430861   16242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 17:42:17.430888   16242 buildroot.go:174] setting up certificates
	I1105 17:42:17.430900   16242 provision.go:84] configureAuth start
	I1105 17:42:17.430912   16242 main.go:141] libmachine: (addons-320753) Calling .GetMachineName
	I1105 17:42:17.431223   16242 main.go:141] libmachine: (addons-320753) Calling .GetIP
	I1105 17:42:17.433608   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.433940   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.433975   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.434088   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.436116   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.436451   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.436478   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.436618   16242 provision.go:143] copyHostCerts
	I1105 17:42:17.436691   16242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 17:42:17.436797   16242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 17:42:17.436859   16242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 17:42:17.436905   16242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.addons-320753 san=[127.0.0.1 192.168.39.201 addons-320753 localhost minikube]
	I1105 17:42:17.700286   16242 provision.go:177] copyRemoteCerts
	I1105 17:42:17.700341   16242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 17:42:17.700362   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.702758   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.703091   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.703120   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.703277   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.703482   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.703622   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.703773   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:17.781314   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 17:42:17.804852   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 17:42:17.826905   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 17:42:17.848935   16242 provision.go:87] duration metric: took 418.021313ms to configureAuth
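(Illustration) The configureAuth step above issues a CA-signed server certificate with the SANs listed in the log (127.0.0.1, 192.168.39.201, addons-320753, localhost, minikube). The following is a rough, self-contained Go sketch of that kind of issuance using crypto/x509; it is not minikube's provision code and, unlike the real run, it generates a throwaway CA instead of reusing the one under .minikube/certs. Error handling is omitted to keep the sketch short.

// servercert.go: hypothetical sketch of issuing a server cert with the logged SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real run loads ca.pem / ca-key.pem from disk instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the provisioning log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-320753"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-320753", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.201")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}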
	I1105 17:42:17.848962   16242 buildroot.go:189] setting minikube options for container-runtime
	I1105 17:42:17.849136   16242 config.go:182] Loaded profile config "addons-320753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:42:17.849205   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.851739   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.852035   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.852067   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.852215   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.852397   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.852541   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.852680   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.852843   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.853035   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.853050   16242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 17:42:18.065191   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 17:42:18.065223   16242 main.go:141] libmachine: Checking connection to Docker...
	I1105 17:42:18.065232   16242 main.go:141] libmachine: (addons-320753) Calling .GetURL
	I1105 17:42:18.066446   16242 main.go:141] libmachine: (addons-320753) DBG | Using libvirt version 6000000
	I1105 17:42:18.068542   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.068879   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.068910   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.069042   16242 main.go:141] libmachine: Docker is up and running!
	I1105 17:42:18.069056   16242 main.go:141] libmachine: Reticulating splines...
	I1105 17:42:18.069064   16242 client.go:171] duration metric: took 23.354923216s to LocalClient.Create
	I1105 17:42:18.069093   16242 start.go:167] duration metric: took 23.354991027s to libmachine.API.Create "addons-320753"
	I1105 17:42:18.069113   16242 start.go:293] postStartSetup for "addons-320753" (driver="kvm2")
	I1105 17:42:18.069129   16242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 17:42:18.069151   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.069367   16242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 17:42:18.069387   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:18.071473   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.071758   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.071784   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.071919   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:18.072099   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.072240   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:18.072348   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:18.148938   16242 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 17:42:18.152915   16242 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 17:42:18.152937   16242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 17:42:18.153016   16242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 17:42:18.153053   16242 start.go:296] duration metric: took 83.92468ms for postStartSetup
	I1105 17:42:18.153092   16242 main.go:141] libmachine: (addons-320753) Calling .GetConfigRaw
	I1105 17:42:18.153699   16242 main.go:141] libmachine: (addons-320753) Calling .GetIP
	I1105 17:42:18.156143   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.156456   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.156486   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.156698   16242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/config.json ...
	I1105 17:42:18.156871   16242 start.go:128] duration metric: took 23.459639016s to createHost
	I1105 17:42:18.156892   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:18.159843   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.160233   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.160268   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.160413   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:18.160579   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.160731   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.160839   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:18.161005   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:18.161205   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:18.161216   16242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 17:42:18.259567   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730828538.234711760
	
	I1105 17:42:18.259590   16242 fix.go:216] guest clock: 1730828538.234711760
	I1105 17:42:18.259598   16242 fix.go:229] Guest: 2024-11-05 17:42:18.23471176 +0000 UTC Remote: 2024-11-05 17:42:18.156883465 +0000 UTC m=+23.562279478 (delta=77.828295ms)
	I1105 17:42:18.259625   16242 fix.go:200] guest clock delta is within tolerance: 77.828295ms
	I1105 17:42:18.259656   16242 start.go:83] releasing machines lock for "addons-320753", held for 23.562482615s
	I1105 17:42:18.259682   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.259949   16242 main.go:141] libmachine: (addons-320753) Calling .GetIP
	I1105 17:42:18.262615   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.262939   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.262959   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.263113   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.263487   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.263634   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.263740   16242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 17:42:18.263784   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:18.263803   16242 ssh_runner.go:195] Run: cat /version.json
	I1105 17:42:18.263824   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:18.266380   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.266635   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.266661   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.266700   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.266797   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:18.266980   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.267121   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:18.267219   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.267238   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.267247   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:18.267394   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:18.267540   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.267696   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:18.267819   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:18.339556   16242 ssh_runner.go:195] Run: systemctl --version
	I1105 17:42:18.371653   16242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 17:42:18.530168   16242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 17:42:18.535544   16242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 17:42:18.535606   16242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 17:42:18.550854   16242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 17:42:18.550886   16242 start.go:495] detecting cgroup driver to use...
	I1105 17:42:18.550956   16242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 17:42:18.566002   16242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 17:42:18.579665   16242 docker.go:217] disabling cri-docker service (if available) ...
	I1105 17:42:18.579724   16242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 17:42:18.593161   16242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 17:42:18.606631   16242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 17:42:18.723216   16242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 17:42:18.857282   16242 docker.go:233] disabling docker service ...
	I1105 17:42:18.857340   16242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 17:42:18.871102   16242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 17:42:18.883893   16242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 17:42:19.013709   16242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 17:42:19.121073   16242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 17:42:19.133973   16242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 17:42:19.151047   16242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 17:42:19.151112   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.160667   16242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 17:42:19.160731   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.170353   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.179989   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.189698   16242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 17:42:19.199960   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.209832   16242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.225928   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
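(Illustration) The sed invocations above all rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. As an illustration only, the first two edits could be expressed as an equivalent in-process rewrite in Go; this is a hypothetical sketch, not how minikube actually applies them.

// crioconf.go: hypothetical in-process equivalent of the first two sed edits above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Replace the pause_image line, as the first sed command does.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Replace the cgroup_manager line, as the second sed command does.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}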
	I1105 17:42:19.235245   16242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 17:42:19.243722   16242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 17:42:19.243770   16242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 17:42:19.256215   16242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 17:42:19.265500   16242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:42:19.373760   16242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 17:42:19.460159   16242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 17:42:19.460261   16242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 17:42:19.464571   16242 start.go:563] Will wait 60s for crictl version
	I1105 17:42:19.464641   16242 ssh_runner.go:195] Run: which crictl
	I1105 17:42:19.468045   16242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 17:42:19.509755   16242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 17:42:19.509865   16242 ssh_runner.go:195] Run: crio --version
	I1105 17:42:19.537428   16242 ssh_runner.go:195] Run: crio --version
	I1105 17:42:19.565435   16242 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 17:42:19.566881   16242 main.go:141] libmachine: (addons-320753) Calling .GetIP
	I1105 17:42:19.569222   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:19.569500   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:19.569522   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:19.569713   16242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 17:42:19.573490   16242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 17:42:19.585478   16242 kubeadm.go:883] updating cluster {Name:addons-320753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 17:42:19.585603   16242 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:42:19.585646   16242 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:42:19.615782   16242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 17:42:19.615868   16242 ssh_runner.go:195] Run: which lz4
	I1105 17:42:19.619479   16242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 17:42:19.623333   16242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 17:42:19.623363   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 17:42:20.741435   16242 crio.go:462] duration metric: took 1.121995054s to copy over tarball
	I1105 17:42:20.741499   16242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 17:42:22.849751   16242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108227s)
	I1105 17:42:22.849776   16242 crio.go:469] duration metric: took 2.108317016s to extract the tarball
	I1105 17:42:22.849783   16242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 17:42:22.886121   16242 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:42:22.925831   16242 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 17:42:22.925853   16242 cache_images.go:84] Images are preloaded, skipping loading
	I1105 17:42:22.925863   16242 kubeadm.go:934] updating node { 192.168.39.201 8443 v1.31.2 crio true true} ...
	I1105 17:42:22.926008   16242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-320753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 17:42:22.926092   16242 ssh_runner.go:195] Run: crio config
	I1105 17:42:22.970242   16242 cni.go:84] Creating CNI manager for ""
	I1105 17:42:22.970265   16242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 17:42:22.970276   16242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 17:42:22.970304   16242 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-320753 NodeName:addons-320753 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 17:42:22.970451   16242 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-320753"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.201"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 17:42:22.970519   16242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 17:42:22.979761   16242 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 17:42:22.979834   16242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 17:42:22.988460   16242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1105 17:42:23.004119   16242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 17:42:23.019649   16242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
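(Illustration) The kubeadm.yaml copied to the node here is the config dumped above, rendered from the cluster parameters shown earlier (node IP, API server port, CRI socket, node name). A minimal sketch of that kind of templating with Go's text/template follows; the template covers only the InitConfiguration fragment and is purely illustrative, not minikube's real template.

// kubeadmcfg.go: hypothetical sketch of rendering part of the kubeadm config.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	params := struct {
		NodeIP        string
		APIServerPort int
		CRISocket     string
		NodeName      string
	}{"192.168.39.201", 8443, "/var/run/crio/crio.sock", "addons-320753"}

	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}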
	I1105 17:42:23.035330   16242 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I1105 17:42:23.039201   16242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 17:42:23.050811   16242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:42:23.172474   16242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:42:23.188403   16242 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753 for IP: 192.168.39.201
	I1105 17:42:23.188425   16242 certs.go:194] generating shared ca certs ...
	I1105 17:42:23.188441   16242 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.188597   16242 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 17:42:23.341446   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt ...
	I1105 17:42:23.341474   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt: {Name:mkfa59703d59064c76459a190023e74d43463f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.341641   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key ...
	I1105 17:42:23.341651   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key: {Name:mk320346499bac546f45eab013d96c660693896c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.341727   16242 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 17:42:23.401273   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt ...
	I1105 17:42:23.401303   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt: {Name:mkeb19c5ec2a163cabde3019131e5181eee0cebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.401474   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key ...
	I1105 17:42:23.401485   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key: {Name:mkd319678fd41709a3afcd63022818e4ae49d586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.401562   16242 certs.go:256] generating profile certs ...
	I1105 17:42:23.401629   16242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.key
	I1105 17:42:23.401644   16242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt with IP's: []
	I1105 17:42:23.517708   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt ...
	I1105 17:42:23.517739   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: {Name:mk8d52f2bc368e6ca0bc29f008e577c6fe6ecf37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.517908   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.key ...
	I1105 17:42:23.517919   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.key: {Name:mk92c264006ac762b887e9eb89473082abebe2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.517989   16242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key.336631c6
	I1105 17:42:23.518008   16242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt.336631c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.201]
	I1105 17:42:23.667156   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt.336631c6 ...
	I1105 17:42:23.667192   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt.336631c6: {Name:mk7ebc684cac944e8b0f2b7b96848a9ee121ece3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.667377   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key.336631c6 ...
	I1105 17:42:23.667400   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key.336631c6: {Name:mk38d47f5b2c3b19a5d51c257838cd81b7f02bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.667496   16242 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt.336631c6 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt
	I1105 17:42:23.667590   16242 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key.336631c6 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key
	I1105 17:42:23.667656   16242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.key
	I1105 17:42:23.667683   16242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.crt with IP's: []
	I1105 17:42:23.763579   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.crt ...
	I1105 17:42:23.763607   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.crt: {Name:mk7dbb29d9695fc682c94fc54b468c1a836fe393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.763778   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.key ...
	I1105 17:42:23.763795   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.key: {Name:mkfa196ba85129499b12cdf30348ef0008a6cc9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.764001   16242 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 17:42:23.764036   16242 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 17:42:23.764057   16242 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 17:42:23.764081   16242 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 17:42:23.764649   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 17:42:23.788539   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 17:42:23.811287   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 17:42:23.833802   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 17:42:23.859222   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1105 17:42:23.889204   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 17:42:23.917639   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 17:42:23.939672   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 17:42:23.961687   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 17:42:23.983158   16242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 17:42:23.997758   16242 ssh_runner.go:195] Run: openssl version
	I1105 17:42:24.003219   16242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 17:42:24.013088   16242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:42:24.017100   16242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:42:24.017156   16242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:42:24.022521   16242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
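(Illustration) The two commands above install minikubeCA.pem into the system trust store and create the /etc/ssl/certs/b5213941.0 link named after the certificate's OpenSSL subject hash. A hypothetical Go sketch of the same pair of operations, shelling out to openssl exactly as the log does and linking straight to the source PEM for simplicity:

// catrust.go: hypothetical sketch of the subject-hash symlink step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// Same as: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
}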
	I1105 17:42:24.032715   16242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 17:42:24.036549   16242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 17:42:24.036604   16242 kubeadm.go:392] StartCluster: {Name:addons-320753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:42:24.036686   16242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 17:42:24.036733   16242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 17:42:24.078130   16242 cri.go:89] found id: ""
	I1105 17:42:24.078210   16242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 17:42:24.087544   16242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 17:42:24.096356   16242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 17:42:24.105272   16242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 17:42:24.105292   16242 kubeadm.go:157] found existing configuration files:
	
	I1105 17:42:24.105331   16242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 17:42:24.113816   16242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 17:42:24.113891   16242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 17:42:24.122684   16242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 17:42:24.130880   16242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 17:42:24.130936   16242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 17:42:24.139808   16242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 17:42:24.148030   16242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 17:42:24.148077   16242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 17:42:24.156797   16242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 17:42:24.164925   16242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 17:42:24.164980   16242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 17:42:24.173486   16242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 17:42:24.312180   16242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 17:42:34.128892   16242 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 17:42:34.128970   16242 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 17:42:34.129061   16242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 17:42:34.129166   16242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 17:42:34.129262   16242 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 17:42:34.129379   16242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 17:42:34.131155   16242 out.go:235]   - Generating certificates and keys ...
	I1105 17:42:34.131249   16242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 17:42:34.131316   16242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 17:42:34.131418   16242 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 17:42:34.131497   16242 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 17:42:34.131582   16242 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 17:42:34.131676   16242 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 17:42:34.131775   16242 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 17:42:34.131945   16242 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-320753 localhost] and IPs [192.168.39.201 127.0.0.1 ::1]
	I1105 17:42:34.132025   16242 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 17:42:34.132172   16242 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-320753 localhost] and IPs [192.168.39.201 127.0.0.1 ::1]
	I1105 17:42:34.132268   16242 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 17:42:34.132365   16242 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 17:42:34.132430   16242 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 17:42:34.132506   16242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 17:42:34.132584   16242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 17:42:34.132683   16242 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 17:42:34.132766   16242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 17:42:34.132863   16242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 17:42:34.132931   16242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 17:42:34.133033   16242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 17:42:34.133127   16242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 17:42:34.134542   16242 out.go:235]   - Booting up control plane ...
	I1105 17:42:34.134648   16242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 17:42:34.134744   16242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 17:42:34.134847   16242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 17:42:34.135028   16242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 17:42:34.135146   16242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 17:42:34.135213   16242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 17:42:34.135348   16242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 17:42:34.135467   16242 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 17:42:34.135533   16242 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001156539s
	I1105 17:42:34.135624   16242 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 17:42:34.135704   16242 kubeadm.go:310] [api-check] The API server is healthy after 4.502039687s
	I1105 17:42:34.135796   16242 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 17:42:34.135923   16242 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 17:42:34.135976   16242 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 17:42:34.136122   16242 kubeadm.go:310] [mark-control-plane] Marking the node addons-320753 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 17:42:34.136175   16242 kubeadm.go:310] [bootstrap-token] Using token: s3vdam.ma6k0x78nxs5a20n
	I1105 17:42:34.137518   16242 out.go:235]   - Configuring RBAC rules ...
	I1105 17:42:34.137616   16242 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 17:42:34.137696   16242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 17:42:34.137849   16242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 17:42:34.138014   16242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 17:42:34.138116   16242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 17:42:34.138184   16242 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 17:42:34.138289   16242 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 17:42:34.138330   16242 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 17:42:34.138366   16242 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 17:42:34.138376   16242 kubeadm.go:310] 
	I1105 17:42:34.138433   16242 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 17:42:34.138438   16242 kubeadm.go:310] 
	I1105 17:42:34.138504   16242 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 17:42:34.138510   16242 kubeadm.go:310] 
	I1105 17:42:34.138532   16242 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 17:42:34.138630   16242 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 17:42:34.138714   16242 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 17:42:34.138725   16242 kubeadm.go:310] 
	I1105 17:42:34.138774   16242 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 17:42:34.138780   16242 kubeadm.go:310] 
	I1105 17:42:34.138837   16242 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 17:42:34.138846   16242 kubeadm.go:310] 
	I1105 17:42:34.138899   16242 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 17:42:34.138985   16242 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 17:42:34.139094   16242 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 17:42:34.139106   16242 kubeadm.go:310] 
	I1105 17:42:34.139229   16242 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 17:42:34.139343   16242 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 17:42:34.139354   16242 kubeadm.go:310] 
	I1105 17:42:34.139451   16242 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s3vdam.ma6k0x78nxs5a20n \
	I1105 17:42:34.139534   16242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 17:42:34.139552   16242 kubeadm.go:310] 	--control-plane 
	I1105 17:42:34.139558   16242 kubeadm.go:310] 
	I1105 17:42:34.139622   16242 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 17:42:34.139633   16242 kubeadm.go:310] 
	I1105 17:42:34.139716   16242 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s3vdam.ma6k0x78nxs5a20n \
	I1105 17:42:34.139836   16242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
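
The kubeadm output above ends with the standard join instructions for additional control-plane and worker nodes. For context, a minimal sketch of how the freshly initialized control plane could be checked from the node, assuming the admin kubeconfig path shown in that output (these commands are not part of the captured log):

    # Hedged verification sketch, not taken from this run: inspect the new control plane
    # using the admin kubeconfig that kubeadm just wrote.
    sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes -o wide
    sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get pods -n kube-system
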
	I1105 17:42:34.139850   16242 cni.go:84] Creating CNI manager for ""
	I1105 17:42:34.139859   16242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 17:42:34.141306   16242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 17:42:34.142311   16242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 17:42:34.154653   16242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
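
The 496-byte 1-k8s.conflist copied here carries the bridge CNI configuration announced above; the log does not reproduce its contents. A minimal illustrative sketch of a bridge-plus-portmap conflist of this general shape (the path /tmp/example-bridge.conflist and all field values are assumptions, not the file minikube actually wrote):

    # Illustrative only: writes an example bridge CNI conflist to a scratch path,
    # showing the typical shape of such a file. Values here are assumptions.
    cat <<'EOF' > /tmp/example-bridge.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
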
	I1105 17:42:34.176526   16242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 17:42:34.176597   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:34.176638   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-320753 minikube.k8s.io/updated_at=2024_11_05T17_42_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=addons-320753 minikube.k8s.io/primary=true
	I1105 17:42:34.321683   16242 ops.go:34] apiserver oom_adj: -16
	I1105 17:42:34.331120   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:34.831983   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:35.331325   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:35.832229   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:36.331211   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:36.832163   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:37.332233   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:37.831944   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:38.332194   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:38.423654   16242 kubeadm.go:1113] duration metric: took 4.2471119s to wait for elevateKubeSystemPrivileges
	I1105 17:42:38.423685   16242 kubeadm.go:394] duration metric: took 14.387085511s to StartCluster
	I1105 17:42:38.423701   16242 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:38.423831   16242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 17:42:38.424237   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:38.424439   16242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 17:42:38.424455   16242 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:42:38.424512   16242 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
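
The toEnable map above shows which addons this profile turns on (ingress, ingress-dns, metrics-server, registry, storage-provisioner, yakd, and others) and which stay off. Outside the test harness, the same addons would normally be toggled per profile with the minikube CLI; a hedged sketch using the profile name from this log (these commands are not part of the captured output):

    # Hedged sketch: list addon status for this profile, then enable the addons
    # exercised by the failing Ingress and MetricsServer tests.
    minikube -p addons-320753 addons list
    minikube -p addons-320753 addons enable ingress
    minikube -p addons-320753 addons enable metrics-server
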
	I1105 17:42:38.424616   16242 addons.go:69] Setting yakd=true in profile "addons-320753"
	I1105 17:42:38.424631   16242 addons.go:69] Setting inspektor-gadget=true in profile "addons-320753"
	I1105 17:42:38.424642   16242 addons.go:69] Setting metrics-server=true in profile "addons-320753"
	I1105 17:42:38.424653   16242 addons.go:234] Setting addon metrics-server=true in "addons-320753"
	I1105 17:42:38.424647   16242 addons.go:69] Setting default-storageclass=true in profile "addons-320753"
	I1105 17:42:38.424658   16242 addons.go:234] Setting addon inspektor-gadget=true in "addons-320753"
	I1105 17:42:38.424669   16242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-320753"
	I1105 17:42:38.424674   16242 config.go:182] Loaded profile config "addons-320753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:42:38.424681   16242 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-320753"
	I1105 17:42:38.424701   16242 addons.go:69] Setting registry=true in profile "addons-320753"
	I1105 17:42:38.424709   16242 addons.go:69] Setting gcp-auth=true in profile "addons-320753"
	I1105 17:42:38.424714   16242 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-320753"
	I1105 17:42:38.424717   16242 addons.go:234] Setting addon registry=true in "addons-320753"
	I1105 17:42:38.424721   16242 addons.go:69] Setting ingress=true in profile "addons-320753"
	I1105 17:42:38.424732   16242 addons.go:69] Setting ingress-dns=true in profile "addons-320753"
	I1105 17:42:38.424743   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424682   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424750   16242 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-320753"
	I1105 17:42:38.424751   16242 addons.go:69] Setting cloud-spanner=true in profile "addons-320753"
	I1105 17:42:38.424766   16242 addons.go:234] Setting addon cloud-spanner=true in "addons-320753"
	I1105 17:42:38.424778   16242 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-320753"
	I1105 17:42:38.424798   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424800   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424691   16242 addons.go:69] Setting volcano=true in profile "addons-320753"
	I1105 17:42:38.425151   16242 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-320753"
	I1105 17:42:38.425164   16242 addons.go:234] Setting addon volcano=true in "addons-320753"
	I1105 17:42:38.425166   16242 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-320753"
	I1105 17:42:38.425169   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425178   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425187   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424696   16242 addons.go:69] Setting volumesnapshots=true in profile "addons-320753"
	I1105 17:42:38.425194   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425203   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.425220   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.424744   16242 addons.go:234] Setting addon ingress=true in "addons-320753"
	I1105 17:42:38.425328   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424703   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.425500   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425524   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.425705   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425735   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.425835   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.424662   16242 addons.go:69] Setting storage-provisioner=true in profile "addons-320753"
	I1105 17:42:38.425860   16242 addons.go:234] Setting addon storage-provisioner=true in "addons-320753"
	I1105 17:42:38.425880   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.425200   16242 addons.go:234] Setting addon volumesnapshots=true in "addons-320753"
	I1105 17:42:38.426020   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.425891   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.426194   16242 out.go:177] * Verifying Kubernetes components...
	I1105 17:42:38.425179   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.424637   16242 addons.go:234] Setting addon yakd=true in "addons-320753"
	I1105 17:42:38.426324   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.426350   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424727   16242 mustload.go:65] Loading cluster: addons-320753
	I1105 17:42:38.426387   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.426408   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.424744   16242 addons.go:234] Setting addon ingress-dns=true in "addons-320753"
	I1105 17:42:38.424687   16242 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-320753"
	I1105 17:42:38.424747   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.425143   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425185   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.425201   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.426508   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.426572   16242 config.go:182] Loaded profile config "addons-320753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:42:38.426597   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.426640   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.426849   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.426904   16242 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-320753"
	I1105 17:42:38.427020   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.427043   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.427116   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.427149   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.427289   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.427328   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.440872   16242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:42:38.446675   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I1105 17:42:38.448137   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I1105 17:42:38.449145   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I1105 17:42:38.451314   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.451346   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.451686   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.451708   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.451717   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.451739   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.452559   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I1105 17:42:38.452688   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.453172   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.453192   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.453259   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.453333   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.453813   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.453836   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.453956   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.453980   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.454034   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.454083   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.454493   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.454508   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.454552   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.454596   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.455020   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.455051   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.455197   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.455533   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.455559   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.463059   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I1105 17:42:38.471431   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.472559   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.472646   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.473019   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.473271   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1105 17:42:38.473625   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.473707   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.473778   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.474298   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.474319   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.474574   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.474666   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.475396   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I1105 17:42:38.475875   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.476412   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.476428   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.476766   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.476943   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.477500   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33659
	I1105 17:42:38.478921   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.479880   16242 addons.go:234] Setting addon default-storageclass=true in "addons-320753"
	I1105 17:42:38.479924   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.480269   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.480333   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.480606   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I1105 17:42:38.480934   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1105 17:42:38.481868   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33433
	I1105 17:42:38.482221   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1105 17:42:38.482240   16242 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1105 17:42:38.482258   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.483070   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.483528   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.483620   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.483639   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.484388   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.484990   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.485026   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.485130   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.485158   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.485546   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.486073   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.486109   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.486302   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.486377   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.486399   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.486658   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.486822   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.486928   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.487075   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.488308   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I1105 17:42:38.488828   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.489301   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.489329   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.489742   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.490297   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.490338   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.492799   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I1105 17:42:38.493245   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.493727   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.493746   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.494115   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.494270   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.495932   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.496296   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.496324   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.503428   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.503490   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.503696   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.503754   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.504366   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.505177   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.505206   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.505828   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.506448   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.506484   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.510920   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I1105 17:42:38.511547   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.512029   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.512058   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.512454   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.512637   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.513680   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46565
	I1105 17:42:38.514107   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.514449   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.514652   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.514675   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.514692   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I1105 17:42:38.515129   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.515129   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.515595   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.515620   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.515690   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.515738   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.515992   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.516634   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.516687   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.517160   16242 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1105 17:42:38.517161   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I1105 17:42:38.518234   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.518412   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1105 17:42:38.518430   16242 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1105 17:42:38.518451   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.518939   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.518958   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.519384   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.519953   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.519987   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.521383   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I1105 17:42:38.521932   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.522181   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.522341   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.522368   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.522618   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.522763   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.522862   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.522957   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.523498   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.523515   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.523865   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.524340   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.524380   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.526527   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I1105 17:42:38.526830   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.528004   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.528023   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.528330   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.528681   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.530164   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.532173   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45911
	I1105 17:42:38.532562   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.532900   16242 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1105 17:42:38.533405   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.533425   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.533826   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.534015   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.534180   16242 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1105 17:42:38.534196   16242 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1105 17:42:38.534215   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.538309   16242 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-320753"
	I1105 17:42:38.538355   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.538746   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.538783   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.539017   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.539077   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.539099   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.539116   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.539306   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.539475   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.539613   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.545834   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I1105 17:42:38.546277   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.546808   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.546836   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.547199   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.547379   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.547439   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I1105 17:42:38.547930   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.549002   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.549018   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.549254   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.549737   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1105 17:42:38.550155   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.550410   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.551317   16242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1105 17:42:38.552290   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.552571   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:38.552592   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:38.552678   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.552897   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:38.552921   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:38.552935   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:38.552952   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:38.552964   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:38.553267   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:38.553279   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	W1105 17:42:38.553363   16242 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1105 17:42:38.553605   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.553629   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.553958   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.554832   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.554861   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.556003   16242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:42:38.557402   16242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:42:38.559246   16242 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:42:38.559273   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1105 17:42:38.559296   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.561156   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I1105 17:42:38.561939   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.563146   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.563375   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.563395   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.563855   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.563883   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.564036   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.564130   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.564178   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.564227   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.564564   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.564707   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.566291   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.567324   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I1105 17:42:38.567879   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.568546   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.568574   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.568756   16242 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1105 17:42:38.568979   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.569208   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I1105 17:42:38.569585   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.569609   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.569619   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.570042   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.570062   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.570412   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.570543   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.570790   16242 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:42:38.570807   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1105 17:42:38.570824   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.571935   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46725
	I1105 17:42:38.572403   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.572696   16242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 17:42:38.572712   16242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 17:42:38.572729   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.573675   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I1105 17:42:38.573966   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I1105 17:42:38.574117   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.574202   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.574569   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.574586   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.574624   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.574697   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.574712   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.575001   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.575272   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.575290   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.575346   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.575389   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.575568   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.575586   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.575588   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.575727   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.575946   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.576180   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.576266   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.576432   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.576612   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.577050   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.577357   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.577633   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.577649   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.577788   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.578375   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.578560   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.578612   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.579006   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.579735   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.579926   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.580155   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I1105 17:42:38.580492   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.580810   16242 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1105 17:42:38.580911   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.581316   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.580936   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I1105 17:42:38.581674   16242 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1105 17:42:38.581719   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.581720   16242 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1105 17:42:38.582009   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.582563   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.582579   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.582631   16242 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:42:38.582644   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1105 17:42:38.582663   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.582806   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.583130   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.583328   16242 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:42:38.583343   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1105 17:42:38.583366   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.583460   16242 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 17:42:38.583469   16242 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 17:42:38.583482   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.583628   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I1105 17:42:38.583932   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.584273   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.584405   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.584421   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.584777   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.584973   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.585408   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I1105 17:42:38.585785   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.586070   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.586202   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.586215   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.586554   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.586780   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.587070   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.587893   16242 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1105 17:42:38.587985   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.588900   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.588919   16242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 17:42:38.589266   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.589579   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.589584   16242 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1105 17:42:38.589910   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1105 17:42:38.589927   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.589608   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.589972   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.589973   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.589777   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.589992   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.590109   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.590156   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.590255   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.590282   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.590333   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1105 17:42:38.590407   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.590428   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.590727   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.591835   16242 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1105 17:42:38.591896   16242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:42:38.592327   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 17:42:38.592349   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.592976   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.592995   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.593171   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.593328   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.593347   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.593874   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.593994   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.594283   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.594356   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.594418   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.594612   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1105 17:42:38.594646   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.595032   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.595334   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.595425   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.595455   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.595522   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.595608   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.595632   16242 out.go:177]   - Using image docker.io/registry:2.8.3
	I1105 17:42:38.595661   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.596150   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I1105 17:42:38.595792   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.596382   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.596496   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.596515   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.596972   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.596992   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.597345   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.597361   16242 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1105 17:42:38.597371   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1105 17:42:38.597384   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.597529   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.598666   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1105 17:42:38.600134   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1105 17:42:38.600178   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.600489   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.600507   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.600657   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.600783   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.600883   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.600971   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	W1105 17:42:38.601647   16242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35742->192.168.39.201:22: read: connection reset by peer
	I1105 17:42:38.601672   16242 retry.go:31] will retry after 274.928015ms: ssh: handshake failed: read tcp 192.168.39.1:35742->192.168.39.201:22: read: connection reset by peer
	I1105 17:42:38.602720   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1105 17:42:38.603924   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1105 17:42:38.605233   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1105 17:42:38.606445   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1105 17:42:38.607585   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1105 17:42:38.607602   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1105 17:42:38.607622   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.610042   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36983
	I1105 17:42:38.610462   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.610485   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.610879   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.610904   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.611063   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.611081   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.611084   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.611255   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.611375   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.611422   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.611493   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.611798   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.613509   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.615147   16242 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1105 17:42:38.616371   16242 out.go:177]   - Using image docker.io/busybox:stable
	I1105 17:42:38.617748   16242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:42:38.617766   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1105 17:42:38.617781   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.620670   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.621034   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.621063   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.621179   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.621344   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.621461   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.621586   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	W1105 17:42:38.623064   16242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35752->192.168.39.201:22: read: connection reset by peer
	I1105 17:42:38.623087   16242 retry.go:31] will retry after 313.150416ms: ssh: handshake failed: read tcp 192.168.39.1:35752->192.168.39.201:22: read: connection reset by peer
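The two handshake failures above are transient: the node's sshd is still coming up, so sshutil dials again after a short randomized delay (the retry.go lines). A minimal, self-contained Go sketch of that retry-with-backoff shape, assuming a hypothetical dial function in place of the real SSH handshake:

    // A minimal sketch (not minikube's retry.go): retry an operation with a short
    // randomized backoff, in the spirit of the "will retry after ..." lines above.
    // The dial function is hypothetical and stands in for the failed SSH handshake.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func dial(attempt int) error {
        if attempt < 2 { // pretend the first two handshakes are reset by the peer
            return errors.New("ssh: handshake failed: connection reset by peer")
        }
        return nil
    }

    func main() {
        var err error
        for attempt := 0; attempt < 5; attempt++ {
            if err = dial(attempt); err == nil {
                fmt.Println("connected")
                return
            }
            delay := time.Duration(200+rand.Intn(200)) * time.Millisecond // ~200-400ms
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        fmt.Println("giving up:", err)
    }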
	I1105 17:42:38.831755   16242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:42:38.831821   16242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 17:42:38.939670   16242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1105 17:42:38.939705   16242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1105 17:42:38.993717   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:42:38.995843   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:42:39.000929   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1105 17:42:39.000959   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1105 17:42:39.016483   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1105 17:42:39.036046   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:42:39.036386   16242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 17:42:39.036402   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1105 17:42:39.064834   16242 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:42:39.064856   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1105 17:42:39.085338   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:42:39.098827   16242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1105 17:42:39.098863   16242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1105 17:42:39.110123   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 17:42:39.123513   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1105 17:42:39.123533   16242 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1105 17:42:39.129464   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:42:39.163533   16242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 17:42:39.163562   16242 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 17:42:39.235428   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1105 17:42:39.235456   16242 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1105 17:42:39.251182   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:42:39.275268   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1105 17:42:39.275298   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1105 17:42:39.304573   16242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1105 17:42:39.304604   16242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1105 17:42:39.411870   16242 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1105 17:42:39.411895   16242 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1105 17:42:39.414585   16242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:42:39.414608   16242 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 17:42:39.430134   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1105 17:42:39.430157   16242 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1105 17:42:39.457143   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:42:39.465651   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1105 17:42:39.465682   16242 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1105 17:42:39.505770   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1105 17:42:39.505802   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1105 17:42:39.614995   16242 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:42:39.615022   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1105 17:42:39.625835   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:42:39.634157   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:42:39.634179   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1105 17:42:39.682706   16242 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:42:39.682729   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1105 17:42:39.820675   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1105 17:42:39.820708   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1105 17:42:39.850700   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:42:39.860512   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:42:39.864522   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:42:40.012426   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1105 17:42:40.012452   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1105 17:42:40.343010   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1105 17:42:40.343036   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1105 17:42:40.824129   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1105 17:42:40.824172   16242 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1105 17:42:41.052795   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1105 17:42:41.052824   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1105 17:42:41.264054   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1105 17:42:41.264100   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1105 17:42:41.268274   16242 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.436480669s)
	I1105 17:42:41.268303   16242 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.436444867s)
	I1105 17:42:41.268330   16242 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
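The sed pipeline that just completed rewrites the coredns ConfigMap's Corefile in place: it inserts a log directive before the existing errors line and a hosts stanza before the forward plugin, which is what makes host.minikube.internal resolve to the host-side IP 192.168.39.1 from inside the cluster. Reconstructed from the sed expressions above (the rest of the Corefile is left as-is and not shown), the edited fragment looks like:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf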
	I1105 17:42:41.269060   16242 node_ready.go:35] waiting up to 6m0s for node "addons-320753" to be "Ready" ...
	I1105 17:42:41.271853   16242 node_ready.go:49] node "addons-320753" has status "Ready":"True"
	I1105 17:42:41.271873   16242 node_ready.go:38] duration metric: took 2.794443ms for node "addons-320753" to be "Ready" ...
	I1105 17:42:41.271880   16242 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:42:41.286272   16242 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:41.633582   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:42:41.633610   16242 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1105 17:42:41.793607   16242 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-320753" context rescaled to 1 replicas
	I1105 17:42:41.900396   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
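Each addon above follows the same two-step pattern: the manifest is first copied onto the node (the addons.go:431 / ssh_runner.go:362 pairs writing under /etc/kubernetes/addons/), and a batch of related manifests is then applied in a single kubectl invocation against the node-local kubeconfig, as in the csi-hostpath command just issued. The apply commands run concurrently, which is why their Completed lines below arrive out of timestamp order.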
	I1105 17:42:42.710143   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.716388624s)
	I1105 17:42:42.710149   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.714279541s)
	I1105 17:42:42.710185   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710196   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710211   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710198   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710207   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.693691101s)
	I1105 17:42:42.710259   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.674190516s)
	I1105 17:42:42.710307   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710318   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710274   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710369   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710649   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:42.710697   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.710711   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.710717   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.710730   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.710743   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710753   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710770   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.710720   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710805   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710878   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:42.710904   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.710928   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.710938   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710946   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.711116   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.711140   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.711149   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.711345   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:42.711379   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.711385   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.711442   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.711456   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.711506   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.711521   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.711664   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:42.711698   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.711706   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:43.299860   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:45.673110   16242 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1105 17:42:45.673148   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:45.676198   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:45.676614   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:45.676641   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:45.676787   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:45.677010   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:45.677159   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:45.677301   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:45.816774   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:46.132290   16242 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1105 17:42:46.213215   16242 addons.go:234] Setting addon gcp-auth=true in "addons-320753"
	I1105 17:42:46.213266   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:46.213551   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:46.213596   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:46.230632   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I1105 17:42:46.231157   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:46.231687   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:46.231712   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:46.232060   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:46.232577   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:46.232635   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:46.247412   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33273
	I1105 17:42:46.247868   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:46.248337   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:46.248361   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:46.248680   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:46.248883   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:46.250615   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:46.250853   16242 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1105 17:42:46.250881   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:46.254057   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:46.254468   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:46.254494   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:46.254627   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:46.254798   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:46.254925   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:46.255121   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:46.823271   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.737895626s)
	I1105 17:42:46.823326   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823325   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.713168748s)
	I1105 17:42:46.823340   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823357   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.693864304s)
	I1105 17:42:46.823363   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823414   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823420   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823428   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823456   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.572230892s)
	I1105 17:42:46.823486   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823495   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823529   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.366354008s)
	I1105 17:42:46.823548   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823558   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823615   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.197749552s)
	I1105 17:42:46.823621   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.823630   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.823634   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823639   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823642   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823647   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823715   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.972975222s)
	I1105 17:42:46.823743   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.823755   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.823755   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.823763   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823773   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823779   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.823789   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.823802   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.823810   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823812   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823817   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823820   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823817   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.963277221s)
	W1105 17:42:46.823848   16242 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1105 17:42:46.823870   16242 retry.go:31] will retry after 138.066697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
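The failure being retried here is an ordering race, not a broken addon: the volume-snapshot CRDs and a VolumeSnapshotClass instance (csi-hostpath-snapclass) are applied in one pass, so the instance is validated before the just-created volumesnapshotclasses.snapshot.storage.k8s.io CRD has been established, hence the "ensure CRDs are installed first" hint. The retry at 17:42:46.962902 below re-applies the same files with --force and succeeds because the CRDs created in the first pass now exist. As a general Kubernetes practice (not something this harness does), the same race can be avoided by applying the CRDs separately and waiting for their Established condition, e.g. kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io, before applying any instances.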
	I1105 17:42:46.823910   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.959363058s)
	I1105 17:42:46.823928   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823938   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.824025   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.824035   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.824049   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.824052   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.824057   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.824067   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.824074   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.824082   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.824089   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.824108   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.824114   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.824124   16242 addons.go:475] Verifying addon ingress=true in "addons-320753"
	I1105 17:42:46.825276   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825289   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.825298   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.825305   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.825426   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.825448   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825452   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.825457   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.825463   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.825506   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.825525   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825531   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.825714   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.825741   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825748   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.825754   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.825756   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.825766   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.825780   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825787   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827521   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.827537   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827547   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827550   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827555   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.827557   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827562   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.827566   16242 addons.go:475] Verifying addon registry=true in "addons-320753"
	I1105 17:42:46.824075   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.827714   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.827751   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827758   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827861   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.827892   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827898   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827906   16242 addons.go:475] Verifying addon metrics-server=true in "addons-320753"
	I1105 17:42:46.827964   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827974   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.829096   16242 out.go:177] * Verifying ingress addon...
	I1105 17:42:46.829099   16242 out.go:177] * Verifying registry addon...
	I1105 17:42:46.829955   16242 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-320753 service yakd-dashboard -n yakd-dashboard
	
	I1105 17:42:46.831463   16242 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1105 17:42:46.831548   16242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1105 17:42:46.874728   16242 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1105 17:42:46.874748   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:46.874960   16242 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1105 17:42:46.874997   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
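The kapi.go lines above poll pods by label selector until they leave Pending. A rough, self-contained client-go sketch of that kind of wait loop, assuming it runs somewhere the node-side kubeconfig path quoted in the log is readable (adjust the path, namespace, and selector otherwise); it checks only the pod phase, whereas the real helper also waits for readiness:

    // Hedged sketch of a "wait for pods matching this label" loop, not minikube's kapi.go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the kubectl commands in the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pods, err := client.CoreV1().Pods("ingress-nginx").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
            if err != nil {
                panic(err)
            }
            running := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    running++
                }
            }
            fmt.Printf("found %d pods for the selector, %d running\n", len(pods.Items), running)
            if len(pods.Items) > 0 && running == len(pods.Items) {
                return
            }
            time.Sleep(2 * time.Second)
        }
    }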
	I1105 17:42:46.917868   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.917894   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.918207   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.918229   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.918206   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.962902   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:42:47.029646   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:47.029673   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:47.029925   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:47.029942   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:47.339789   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:47.342082   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:47.876517   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:47.877984   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:48.224724   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.324272206s)
	I1105 17:42:48.224769   16242 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.973891875s)
	I1105 17:42:48.224783   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:48.224800   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:48.225069   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:48.225116   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:48.225128   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:48.225146   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:48.225158   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:48.225483   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:48.225523   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:48.225534   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:48.225544   16242 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-320753"
	I1105 17:42:48.226471   16242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:42:48.227477   16242 out.go:177] * Verifying csi-hostpath-driver addon...
	I1105 17:42:48.229013   16242 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1105 17:42:48.229945   16242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1105 17:42:48.230306   16242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1105 17:42:48.230322   16242 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1105 17:42:48.257256   16242 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:42:48.257287   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:48.432386   16242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1105 17:42:48.432436   16242 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1105 17:42:48.571158   16242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:42:48.571178   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1105 17:42:48.643057   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:48.643560   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:48.643593   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:48.645257   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:42:48.736882   16242 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:42:48.736905   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:48.836753   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:48.837025   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:49.235227   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:49.336834   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:49.337040   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:49.593589   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.630637429s)
	I1105 17:42:49.593679   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:49.593701   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:49.593954   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:49.594003   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:49.594012   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:49.594027   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:49.594038   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:49.594264   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:49.594354   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:49.594333   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:49.734240   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:49.835643   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:49.837798   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:50.266957   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.621667243s)
	I1105 17:42:50.267016   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:50.267028   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:50.267336   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:50.267423   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:50.267436   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:50.267441   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:50.267385   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:50.267683   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:50.267711   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:50.267731   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:50.269277   16242 addons.go:475] Verifying addon gcp-auth=true in "addons-320753"
	I1105 17:42:50.270615   16242 out.go:177] * Verifying gcp-auth addon...
	I1105 17:42:50.272317   16242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1105 17:42:50.276491   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:50.331287   16242 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1105 17:42:50.331312   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:50.377841   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:50.379460   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:50.735645   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:50.776530   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:50.794327   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:50.836976   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:50.837648   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:51.234194   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:51.275258   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:51.335749   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:51.336014   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:51.735421   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:51.775611   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:51.836086   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:51.836139   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:52.235326   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:52.275333   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:52.336692   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:52.336903   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:52.736046   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:52.776676   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:52.836413   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:52.836566   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:53.234811   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:53.276119   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:53.292238   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:53.335683   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:53.335951   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:53.769023   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:53.775288   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:53.836112   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:53.836730   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:54.235475   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:54.275345   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:54.336377   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:54.336537   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:54.735997   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:54.776269   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:54.836185   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:54.837552   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:55.234424   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:55.275538   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:55.292306   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:55.336423   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:55.337175   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:55.734667   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:55.776047   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:55.835750   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:55.835804   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:56.235572   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:56.275623   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:56.335640   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:56.335872   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:56.735321   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:56.775389   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:56.835252   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:56.836011   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:57.234933   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:57.275869   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:57.335178   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:57.335956   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:57.734759   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:57.776045   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:57.792995   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:57.835707   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:57.836176   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:58.234717   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:58.276284   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:58.336492   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:58.336808   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:58.734536   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:58.775801   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:58.835025   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:58.836260   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:59.235499   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:59.276959   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:59.336252   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:59.338089   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:59.734617   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:59.775238   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:59.836920   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:59.837454   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:00.234893   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:00.275485   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:00.292287   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:00.336371   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:00.336676   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:00.733707   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:00.776336   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:00.835145   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:00.835758   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:01.235488   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:01.275432   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:01.337132   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:01.337508   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:01.907767   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:01.907892   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:01.908389   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:01.908571   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:02.234946   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:02.276157   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:02.292555   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:02.336126   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:02.336536   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:02.735728   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:02.787821   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:02.842833   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:02.843031   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:03.234905   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:03.276407   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:03.335608   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:03.336375   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:03.735388   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:03.775578   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:03.835768   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:03.836327   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:04.424542   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:04.424689   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:04.425038   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:04.426802   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:04.427187   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:04.734396   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:04.776538   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:04.835880   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:04.836343   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:05.233949   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:05.276140   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:05.335791   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:05.336588   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:05.734061   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:05.776329   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:05.836032   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:05.836409   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:06.742384   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:06.742635   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:06.742810   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:06.743372   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:06.745170   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:06.746926   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:06.775729   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:06.836589   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:06.837260   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:07.234842   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:07.275788   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:07.335492   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:07.336574   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:07.734537   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:07.786311   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:07.839865   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:07.841620   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:08.235593   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:08.276478   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:08.335745   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:08.335953   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:08.734462   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:08.775753   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:08.791674   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:08.835533   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:08.836021   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:09.236428   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:09.275984   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:09.336100   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:09.336392   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:09.734604   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:09.775645   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:09.835851   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:09.836253   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:10.235035   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:10.276329   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:10.334739   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:10.335401   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:10.734878   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:10.776739   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:10.791881   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:10.835352   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:10.835541   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:11.234803   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:11.275748   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:11.336792   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:11.337187   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:11.735502   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:11.776022   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:11.837307   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:11.837775   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:12.235757   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:12.276143   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:12.335085   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:12.335459   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:12.735550   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:12.775748   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:12.792324   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:12.836177   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:12.836388   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:13.234745   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:13.275326   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:13.335383   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:13.336454   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:13.734339   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:13.776408   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:13.835862   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:13.836316   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:14.233665   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:14.276099   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:14.335771   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:14.336042   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:14.735327   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:14.775256   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:14.793855   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:14.835810   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:14.835879   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:15.235569   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:15.275760   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:15.336516   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:15.336557   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:15.734818   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:15.775762   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:15.835929   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:15.836626   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:16.234842   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:16.276090   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:16.336038   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:16.336403   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:16.733561   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:16.775721   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:16.837265   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:16.837895   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:17.234989   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:17.275933   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:17.292950   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:17.334893   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:17.336385   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:17.734468   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:17.775885   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:17.835670   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:17.836007   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:18.237865   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:18.276172   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:18.292198   16242 pod_ready.go:93] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.292225   16242 pod_ready.go:82] duration metric: took 37.005905525s for pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.292242   16242 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-67h67" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.293922   16242 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-67h67" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-67h67" not found
	I1105 17:43:18.293948   16242 pod_ready.go:82] duration metric: took 1.697844ms for pod "coredns-7c65d6cfc9-67h67" in "kube-system" namespace to be "Ready" ...
	E1105 17:43:18.293960   16242 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-67h67" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-67h67" not found
	I1105 17:43:18.293970   16242 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cttxl" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.298198   16242 pod_ready.go:93] pod "coredns-7c65d6cfc9-cttxl" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.298222   16242 pod_ready.go:82] duration metric: took 4.243824ms for pod "coredns-7c65d6cfc9-cttxl" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.298234   16242 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.302198   16242 pod_ready.go:93] pod "etcd-addons-320753" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.302219   16242 pod_ready.go:82] duration metric: took 3.976888ms for pod "etcd-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.302226   16242 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.307705   16242 pod_ready.go:93] pod "kube-apiserver-addons-320753" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.307724   16242 pod_ready.go:82] duration metric: took 5.49182ms for pod "kube-apiserver-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.307732   16242 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.335237   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:18.335455   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:18.489772   16242 pod_ready.go:93] pod "kube-controller-manager-addons-320753" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.489794   16242 pod_ready.go:82] duration metric: took 182.055769ms for pod "kube-controller-manager-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.489805   16242 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-24n9l" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.734369   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:18.775914   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:18.836651   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:18.836707   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:18.890045   16242 pod_ready.go:93] pod "kube-proxy-24n9l" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.890069   16242 pod_ready.go:82] duration metric: took 400.25624ms for pod "kube-proxy-24n9l" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.890082   16242 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:19.235572   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:19.290301   16242 pod_ready.go:93] pod "kube-scheduler-addons-320753" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:19.290329   16242 pod_ready.go:82] duration metric: took 400.238241ms for pod "kube-scheduler-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:19.290343   16242 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rgxmq" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:19.335627   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:19.336070   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:19.336184   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:19.690855   16242 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rgxmq" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:19.690879   16242 pod_ready.go:82] duration metric: took 400.528046ms for pod "nvidia-device-plugin-daemonset-rgxmq" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:19.690887   16242 pod_ready.go:39] duration metric: took 38.418998496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:43:19.690904   16242 api_server.go:52] waiting for apiserver process to appear ...
	I1105 17:43:19.690992   16242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 17:43:19.710532   16242 api_server.go:72] duration metric: took 41.286043118s to wait for apiserver process to appear ...
	I1105 17:43:19.710557   16242 api_server.go:88] waiting for apiserver healthz status ...
	I1105 17:43:19.710575   16242 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I1105 17:43:19.714752   16242 api_server.go:279] https://192.168.39.201:8443/healthz returned 200:
	ok
	I1105 17:43:19.715745   16242 api_server.go:141] control plane version: v1.31.2
	I1105 17:43:19.715766   16242 api_server.go:131] duration metric: took 5.203361ms to wait for apiserver health ...
	I1105 17:43:19.715774   16242 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 17:43:19.734449   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:19.776054   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:19.835917   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:19.836229   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:19.896414   16242 system_pods.go:59] 18 kube-system pods found
	I1105 17:43:19.896448   16242 system_pods.go:61] "amd-gpu-device-plugin-h5b9p" [012ac43a-bb0b-4a85-91d7-47b7b36eb7c3] Running
	I1105 17:43:19.896457   16242 system_pods.go:61] "coredns-7c65d6cfc9-cttxl" [2478e920-f380-4190-bc39-00c34d84a86f] Running
	I1105 17:43:19.896466   16242 system_pods.go:61] "csi-hostpath-attacher-0" [07c0442e-f739-45c1-bce1-70dba665cbba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1105 17:43:19.896474   16242 system_pods.go:61] "csi-hostpath-resizer-0" [53cca88c-38b8-486f-ac5b-b155d7a0fcbd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1105 17:43:19.896484   16242 system_pods.go:61] "csi-hostpathplugin-ssdqg" [55586e10-8074-4b16-8197-d3b8dfeb30fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1105 17:43:19.896491   16242 system_pods.go:61] "etcd-addons-320753" [f97557d4-2f51-4ec7-bd14-c47c64cee30b] Running
	I1105 17:43:19.896497   16242 system_pods.go:61] "kube-apiserver-addons-320753" [a127d10c-37ed-4d05-a8f7-f8e855bcf716] Running
	I1105 17:43:19.896506   16242 system_pods.go:61] "kube-controller-manager-addons-320753" [0ddb9a92-e16b-45ea-9eb2-2033d2795283] Running
	I1105 17:43:19.896516   16242 system_pods.go:61] "kube-ingress-dns-minikube" [1eba0773-5303-4096-98b4-0e8258855ad4] Running
	I1105 17:43:19.896522   16242 system_pods.go:61] "kube-proxy-24n9l" [64cb0df5-d57b-4782-bae7-4ac5639dc01e] Running
	I1105 17:43:19.896527   16242 system_pods.go:61] "kube-scheduler-addons-320753" [3de149a1-916c-48c9-8f62-f76e0c1682e5] Running
	I1105 17:43:19.896536   16242 system_pods.go:61] "metrics-server-84c5f94fbc-khd9b" [5c9668b9-1b38-4b29-a16b-750ee7a74276] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 17:43:19.896542   16242 system_pods.go:61] "nvidia-device-plugin-daemonset-rgxmq" [20281175-a7ec-44e4-a0f9-e0dd96dfe10c] Running
	I1105 17:43:19.896551   16242 system_pods.go:61] "registry-66c9cd494c-xtz7j" [549ed7b1-2983-4fca-8715-25afc280c616] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1105 17:43:19.896562   16242 system_pods.go:61] "registry-proxy-k2wqh" [b9f4e07d-8955-4605-8ecd-360952c67ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1105 17:43:19.896571   16242 system_pods.go:61] "snapshot-controller-56fcc65765-6rhm5" [955e4299-ba79-4530-8ebe-78c35525b9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1105 17:43:19.896578   16242 system_pods.go:61] "snapshot-controller-56fcc65765-kh6t8" [24c4c41d-37d5-45b9-a1db-f0a70d94983b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1105 17:43:19.896584   16242 system_pods.go:61] "storage-provisioner" [1ee0e5cc-73a4-44dc-9637-8dbfd1e52030] Running
	I1105 17:43:19.896592   16242 system_pods.go:74] duration metric: took 180.811688ms to wait for pod list to return data ...
	I1105 17:43:19.896603   16242 default_sa.go:34] waiting for default service account to be created ...
	I1105 17:43:20.090566   16242 default_sa.go:45] found service account: "default"
	I1105 17:43:20.090591   16242 default_sa.go:55] duration metric: took 193.98171ms for default service account to be created ...
	I1105 17:43:20.090603   16242 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 17:43:20.234897   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:20.275567   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:20.298643   16242 system_pods.go:86] 18 kube-system pods found
	I1105 17:43:20.298681   16242 system_pods.go:89] "amd-gpu-device-plugin-h5b9p" [012ac43a-bb0b-4a85-91d7-47b7b36eb7c3] Running
	I1105 17:43:20.298690   16242 system_pods.go:89] "coredns-7c65d6cfc9-cttxl" [2478e920-f380-4190-bc39-00c34d84a86f] Running
	I1105 17:43:20.298700   16242 system_pods.go:89] "csi-hostpath-attacher-0" [07c0442e-f739-45c1-bce1-70dba665cbba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1105 17:43:20.298710   16242 system_pods.go:89] "csi-hostpath-resizer-0" [53cca88c-38b8-486f-ac5b-b155d7a0fcbd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1105 17:43:20.298720   16242 system_pods.go:89] "csi-hostpathplugin-ssdqg" [55586e10-8074-4b16-8197-d3b8dfeb30fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1105 17:43:20.298731   16242 system_pods.go:89] "etcd-addons-320753" [f97557d4-2f51-4ec7-bd14-c47c64cee30b] Running
	I1105 17:43:20.298737   16242 system_pods.go:89] "kube-apiserver-addons-320753" [a127d10c-37ed-4d05-a8f7-f8e855bcf716] Running
	I1105 17:43:20.298746   16242 system_pods.go:89] "kube-controller-manager-addons-320753" [0ddb9a92-e16b-45ea-9eb2-2033d2795283] Running
	I1105 17:43:20.298756   16242 system_pods.go:89] "kube-ingress-dns-minikube" [1eba0773-5303-4096-98b4-0e8258855ad4] Running
	I1105 17:43:20.298761   16242 system_pods.go:89] "kube-proxy-24n9l" [64cb0df5-d57b-4782-bae7-4ac5639dc01e] Running
	I1105 17:43:20.298769   16242 system_pods.go:89] "kube-scheduler-addons-320753" [3de149a1-916c-48c9-8f62-f76e0c1682e5] Running
	I1105 17:43:20.298780   16242 system_pods.go:89] "metrics-server-84c5f94fbc-khd9b" [5c9668b9-1b38-4b29-a16b-750ee7a74276] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 17:43:20.298788   16242 system_pods.go:89] "nvidia-device-plugin-daemonset-rgxmq" [20281175-a7ec-44e4-a0f9-e0dd96dfe10c] Running
	I1105 17:43:20.298796   16242 system_pods.go:89] "registry-66c9cd494c-xtz7j" [549ed7b1-2983-4fca-8715-25afc280c616] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1105 17:43:20.298807   16242 system_pods.go:89] "registry-proxy-k2wqh" [b9f4e07d-8955-4605-8ecd-360952c67ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1105 17:43:20.298819   16242 system_pods.go:89] "snapshot-controller-56fcc65765-6rhm5" [955e4299-ba79-4530-8ebe-78c35525b9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1105 17:43:20.298831   16242 system_pods.go:89] "snapshot-controller-56fcc65765-kh6t8" [24c4c41d-37d5-45b9-a1db-f0a70d94983b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1105 17:43:20.298839   16242 system_pods.go:89] "storage-provisioner" [1ee0e5cc-73a4-44dc-9637-8dbfd1e52030] Running
	I1105 17:43:20.298852   16242 system_pods.go:126] duration metric: took 208.242321ms to wait for k8s-apps to be running ...
	I1105 17:43:20.298869   16242 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 17:43:20.298924   16242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 17:43:20.337714   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:20.338291   16242 system_svc.go:56] duration metric: took 39.420489ms WaitForService to wait for kubelet
	I1105 17:43:20.338316   16242 kubeadm.go:582] duration metric: took 41.913831742s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:43:20.338338   16242 node_conditions.go:102] verifying NodePressure condition ...
	I1105 17:43:20.338867   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:20.490641   16242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 17:43:20.490679   16242 node_conditions.go:123] node cpu capacity is 2
	I1105 17:43:20.490694   16242 node_conditions.go:105] duration metric: took 152.350003ms to run NodePressure ...
	I1105 17:43:20.490710   16242 start.go:241] waiting for startup goroutines ...
	I1105 17:43:20.735417   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:20.776609   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:20.836737   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:20.837483   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:21.516893   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:21.517444   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:21.517601   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:21.517622   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:21.734697   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:21.775267   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:21.836927   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:21.837000   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:22.237498   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:22.276242   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:22.336335   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:22.336712   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:22.735311   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:22.777478   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:22.835918   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:22.836713   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:23.235792   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:23.275882   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:23.335114   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:23.335821   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:23.735214   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:23.776555   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:23.836855   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:23.837155   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:24.234587   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:24.276213   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:24.335551   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:24.335889   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:24.735682   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:24.836925   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:24.837753   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:24.838084   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:25.235351   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:25.279001   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:25.335340   16242 kapi.go:107] duration metric: took 38.503790715s to wait for kubernetes.io/minikube-addons=registry ...
	I1105 17:43:25.335381   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:25.734751   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:25.775712   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:25.836377   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:26.235414   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:26.277511   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:26.336378   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:26.735178   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:26.775642   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:26.836374   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:27.277781   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:27.280021   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:27.374319   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:27.736322   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:27.776499   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:27.835641   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:28.235274   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:28.277209   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:28.335634   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:28.735415   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:28.776430   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:28.836392   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:29.235685   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:29.335167   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:29.335880   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:29.733802   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:29.776095   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:29.835570   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:30.234679   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:30.275938   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:30.335948   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:30.735107   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:30.775934   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:30.835899   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:31.235286   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:31.275401   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:31.336311   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:31.733931   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:31.775976   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:31.835019   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:32.617048   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:32.617408   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:32.617491   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:32.734595   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:32.775721   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:32.836126   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:33.234844   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:33.275260   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:33.341650   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:33.733926   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:33.776135   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:33.835365   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:34.234778   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:34.275583   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:34.341124   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:34.735148   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:34.775321   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:34.835081   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:35.235237   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:35.275971   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:35.335403   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:35.734779   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:35.776149   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:35.836383   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:36.235572   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:36.276197   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:36.335465   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:36.734433   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:36.775848   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:36.836575   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:37.238267   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:37.287482   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:37.343317   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:37.736234   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:37.776391   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:37.837067   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:38.235433   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:38.276266   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:38.336785   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:38.734784   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:38.776063   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:38.840576   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:39.235255   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:39.276266   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:39.335711   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:39.734964   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:39.777541   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:39.836094   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:40.235381   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:40.275778   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:40.335081   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:40.734912   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:40.775976   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:40.835314   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:41.234208   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:41.275657   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:41.336259   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:41.736488   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:41.835170   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:41.836362   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:42.233869   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:42.276058   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:42.336029   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:42.735010   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:42.834504   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:42.836741   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:43.238878   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:43.276158   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:43.335593   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:43.735236   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:43.776408   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:43.835658   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:44.235012   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:44.276738   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:44.336173   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:44.735149   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:44.777270   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:44.837506   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:45.235439   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:45.275676   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:45.335810   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:45.734610   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:45.775964   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:45.836061   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:46.234278   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:46.275667   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:46.336084   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:46.734821   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:46.777538   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:46.837517   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:47.234335   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:47.275245   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:47.335841   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:47.888925   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:47.890110   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:47.896165   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:48.236423   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:48.277204   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:48.340814   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:48.742310   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:48.781231   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:48.882171   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:49.236084   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:49.277147   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:49.336915   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:49.736596   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:49.776208   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:49.838031   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:50.237134   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:50.276784   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:50.335605   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:50.734370   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:50.775840   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:50.835453   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:51.241678   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:51.278304   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:51.335932   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:51.734656   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:51.834642   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:51.835988   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:52.235187   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:52.334825   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:52.337329   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:52.735471   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:52.778320   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:52.837125   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:53.235745   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:53.277211   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:53.335819   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:53.735030   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:53.775490   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:53.836269   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:54.599314   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:54.599744   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:54.600519   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:54.734695   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:54.775281   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:54.835210   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:55.238965   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:55.341963   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:55.342220   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:55.734333   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:55.775480   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:55.835484   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:56.234809   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:56.275859   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:56.335335   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:56.734076   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:56.776546   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:57.220875   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:57.320037   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:57.320089   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:57.335828   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:57.735134   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:57.776893   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:57.835745   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:58.234653   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:58.275544   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:58.336238   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:58.735567   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:58.779153   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:58.835910   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:59.236581   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:59.275989   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:59.335442   16242 kapi.go:107] duration metric: took 1m12.50397447s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1105 17:43:59.734474   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:59.776236   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:00.389254   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:00.488820   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:00.735574   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:00.775774   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:01.234935   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:01.276555   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:01.734870   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:01.776142   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:02.234589   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:02.276661   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:02.734911   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:02.776353   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:03.234482   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:03.275668   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:03.735063   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:03.776470   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:04.236117   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:04.334267   16242 kapi.go:107] duration metric: took 1m14.061946319s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1105 17:44:04.336055   16242 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-320753 cluster.
	I1105 17:44:04.337817   16242 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1105 17:44:04.339179   16242 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
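The gcp-auth hint printed just above can be exercised against this cluster. A minimal sketch of opting a pod out of credential mounting, assuming the gcp-auth admission webhook only looks for the gcp-auth-skip-secret label key at pod creation time (the pod name no-creds-demo and the label value "true" are illustrative, not taken from the log):

    # create a pod carrying the skip label at creation time
    # (mutating admission webhooks only act when the pod is admitted)
    kubectl --context addons-320753 run no-creds-demo --image=busybox --restart=Never \
      -l gcp-auth-skip-secret=true -- sleep 300
    # inspect the resulting spec; no GCP credential volume or env should have been injected
    kubectl --context addons-320753 get pod no-creds-demo -o yaml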
	I1105 17:44:04.735117   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:05.235358   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:05.739476   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:06.234866   16242 kapi.go:107] duration metric: took 1m18.004919144s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1105 17:44:06.236747   16242 out.go:177] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, storage-provisioner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1105 17:44:06.238082   16242 addons.go:510] duration metric: took 1m27.813567554s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns storage-provisioner nvidia-device-plugin inspektor-gadget metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1105 17:44:06.238126   16242 start.go:246] waiting for cluster config update ...
	I1105 17:44:06.238149   16242 start.go:255] writing updated cluster config ...
	I1105 17:44:06.238736   16242 ssh_runner.go:195] Run: rm -f paused
	I1105 17:44:06.288800   16242 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 17:44:06.290635   16242 out.go:177] * Done! kubectl is now configured to use "addons-320753" cluster and "default" namespace by default
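With the start log complete, the kubectl configuration it reports can be checked before the CRI-O dump below; a small usage sketch using only standard kubectl commands (the context name is assumed to match the profile name addons-320753, as the "Done!" line suggests):

    kubectl config current-context                 # expected to report the addons-320753 context
    kubectl --context addons-320753 get pods -A    # the addon pods listed earlier in this log should appear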
	
	
	==> CRI-O <==
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.182359476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828849182329806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594745,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=173301b2-8853-4af5-b846-a45b35d311c8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.183155624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2037d0f-88bd-413e-95ca-0ecd180ed30a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.183222587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2037d0f-88bd-413e-95ca-0ecd180ed30a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.183644348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cec3dd2fd1269de0f11405b4100a2e7acb250053135b5b6d4035614dfbaaed5d,PodSandboxId:1346002c1f6a74c3ab7eb285587c90ab9d33d98534ac34753b88204fc0cb2a17,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730828710663714183,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4adce59-2101-44a5-bcc1-53c27718456c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9fe762f3082313fe72e3170b77aa50956917693f6b18b58ce5c6e39ce86fa4,PodSandboxId:741d09d08bf73084bf0e9117584aac959f61b591d7fbffc766483f1f5ca3b8af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730828650675769940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c68727b-d745-4759-85fb-537736d0c04a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251293ae787de4bec03f5dc0c796ad29b7b7b07378b98f5b4fd56b24f8610bd9,PodSandboxId:31046b4a30845d064621eae56ed5b6dbab0594e466b6e6f0ea9e24685e6882a7,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730828638597298412,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-cmh5r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139cff7b-da68-4689-b30d-db8b2f78601d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:07d744177af5b683ae801cac8f435655e4d361788a5d82aac10a504db4b55783,PodSandboxId:d01120044a0e2a7a2ecb812d94b477ca32ae3a3cf4e20b9dfff2cc6b2b2d68c2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730828625542250594,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-knwwm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cea67ea7-887d-42b2-bf98-4575f6df2b53,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f3e3c5c9f563981d74758148000483b5c77c7f3ab42ac9d62737fb764cdb30,PodSandboxId:6ff0de49c585d306ab87da47108293578ccd32980b56fd54da1f7e9f62a82540,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730828624998640941,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9hwqj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c33ddd18-be84-46df-935b-db470914225b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33323a81892f4375c6cc05afd9b326f6e53f4ac782a0313cf67e8e715e34cd7,PodSandboxId:7330784d967378fb460a3ac8683e62b3425e9db40c7b1a80d51a154afda5639a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730828606311775112,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-khd9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9668b9-1b38-4b29-a16b-750ee7a74276,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0487aadaec9bd775dfd01e3d25e94f79418bc3e4e7b5297afb76b628a76f9131,PodSandboxId:1784a3a31f6659fdefd4e533e6987064775fc9f5ce8ac1b7e3473eb8dbeefec4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730828597546118608,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h5b9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012ac43a-bb0b-4a85-91d7-47b7b36eb7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1e06ef7e49c1a193e4375d04d8fda6441b933afedc8978f0b1bdf28513193f,PodSandboxId:468829bf456c7aa9318abcea8a652c590d07672b1a6fef7b5c3bc3300d7b650f,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730828574506710735,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eba0773-5303-4096-98b4-0e8258855ad4,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bda9b1ee1f520a339c1c38c4190b89e3d54fda6da4f6bab3f97307652093ee,PodSandboxId:66f71f6fb3fc29789
850be79773283d3391863635e6a6eda20082662161df53a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730828563822400290,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee0e5cc-73a4-44dc-9637-8dbfd1e52030,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3adb7b81c758126188ee64df255d65d9c40226620bc2f22e7229bbcbfdc5e6f1,PodSandboxId:f784a16ce173d9967cb1ebbb97614
acab17f82a7c7b7bed794b86af9249e2446,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730828562291340879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cttxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2478e920-f380-4190-bc39-00c34d84a86f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feda2f7d89c6255b286785586987c7cd681689d3a3fd976f599ebc5569097346,PodSandboxId:b2f5ff6e95dfeb8852a5bbd53ee22940a099fa2a3cb48edc6b4bd38fef9c3f10,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730828559575755109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24n9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cb0df5-d57b-4782-bae7-4ac5639dc01e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:cadd2623fa5245c421015c9e1411ea025747aebf7b85d4096a4f25cd8bda290a,PodSandboxId:356da5ed5f56d6cdf965d434988e98f2ea4c48d52ac8d905b9415e188934147a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730828548475236815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1401c4598f2e3dfc80febc83d26bd72,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:9d1b0135e5cf414fb8a3461ca0b5503878235ee3ec17aef58adf381fe1af14b8,PodSandboxId:718e09d3d1bf36d15904b46155f0c6aaeda36ff2881306a06fc43a8771b9e61a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730828548464806702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d09859585694c955c161417e3cd2061,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:a8df3d059249156bad67704cc7dd20dce767205d93e153a4008f55bd62bd6d3c,PodSandboxId:99b3e36649d9d135df7afae49b460f9b918a70235f490ac024c5232e34ffeb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730828548459917560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2510abf723755cf16e6c080513cf1135,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:467f16dcbd4a797a1ef27dc71b2725cef0e3de49915c67a4d2b6f0d235b64f7d,PodSandboxId:c6b5c3d1a21b77bb05b0336bff301bcbb0cbda0b76d745f50b8b1196ee6fead7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730828548455543238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d481b44bde15a13310363b908cd76a45,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=f2037d0f-88bd-413e-95ca-0ecd180ed30a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.218991926Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3570743-730e-41da-9a94-19bc6b44bd82 name=/runtime.v1.RuntimeService/Version
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.219144965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3570743-730e-41da-9a94-19bc6b44bd82 name=/runtime.v1.RuntimeService/Version
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.220751302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9086536b-6725-4a54-9518-20366b0e8f34 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.222504089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828849222466504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594745,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9086536b-6725-4a54-9518-20366b0e8f34 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.223227123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2aad1eb4-fd0d-4a57-a47b-c053b37a2c03 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.223304950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2aad1eb4-fd0d-4a57-a47b-c053b37a2c03 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:47:29 addons-320753 crio[660]: time="2024-11-05 17:47:29.223636062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cec3dd2fd1269de0f11405b4100a2e7acb250053135b5b6d4035614dfbaaed5d,PodSandboxId:1346002c1f6a74c3ab7eb285587c90ab9d33d98534ac34753b88204fc0cb2a17,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730828710663714183,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4adce59-2101-44a5-bcc1-53c27718456c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9fe762f3082313fe72e3170b77aa50956917693f6b18b58ce5c6e39ce86fa4,PodSandboxId:741d09d08bf73084bf0e9117584aac959f61b591d7fbffc766483f1f5ca3b8af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730828650675769940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c68727b-d745-4759-85fb-537736d0c04a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251293ae787de4bec03f5dc0c796ad29b7b7b07378b98f5b4fd56b24f8610bd9,PodSandboxId:31046b4a30845d064621eae56ed5b6dbab0594e466b6e6f0ea9e24685e6882a7,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730828638597298412,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-cmh5r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 139cff7b-da68-4689-b30d-db8b2f78601d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:07d744177af5b683ae801cac8f435655e4d361788a5d82aac10a504db4b55783,PodSandboxId:d01120044a0e2a7a2ecb812d94b477ca32ae3a3cf4e20b9dfff2cc6b2b2d68c2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730828625542250594,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-knwwm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cea67ea7-887d-42b2-bf98-4575f6df2b53,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f3e3c5c9f563981d74758148000483b5c77c7f3ab42ac9d62737fb764cdb30,PodSandboxId:6ff0de49c585d306ab87da47108293578ccd32980b56fd54da1f7e9f62a82540,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730828624998640941,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9hwqj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c33ddd18-be84-46df-935b-db470914225b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33323a81892f4375c6cc05afd9b326f6e53f4ac782a0313cf67e8e715e34cd7,PodSandboxId:7330784d967378fb460a3ac8683e62b3425e9db40c7b1a80d51a154afda5639a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730828606311775112,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-khd9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c9668b9-1b38-4b29-a16b-750ee7a74276,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0487aadaec9bd775dfd01e3d25e94f79418bc3e4e7b5297afb76b628a76f9131,PodSandboxId:1784a3a31f6659fdefd4e533e6987064775fc9f5ce8ac1b7e3473eb8dbeefec4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730828597546118608,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h5b9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012ac43a-bb0b-4a85-91d7-47b7b36eb7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1e06ef7e49c1a193e4375d04d8fda6441b933afedc8978f0b1bdf28513193f,PodSandboxId:468829bf456c7aa9318abcea8a652c590d07672b1a6fef7b5c3bc3300d7b650f,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730828574506710735,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eba0773-5303-4096-98b4-0e8258855ad4,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bda9b1ee1f520a339c1c38c4190b89e3d54fda6da4f6bab3f97307652093ee,PodSandboxId:66f71f6fb3fc29789
850be79773283d3391863635e6a6eda20082662161df53a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730828563822400290,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee0e5cc-73a4-44dc-9637-8dbfd1e52030,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3adb7b81c758126188ee64df255d65d9c40226620bc2f22e7229bbcbfdc5e6f1,PodSandboxId:f784a16ce173d9967cb1ebbb97614
acab17f82a7c7b7bed794b86af9249e2446,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730828562291340879,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cttxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2478e920-f380-4190-bc39-00c34d84a86f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feda2f7d89c6255b286785586987c7cd681689d3a3fd976f599ebc5569097346,PodSandboxId:b2f5ff6e95dfeb8852a5bbd53ee22940a099fa2a3cb48edc6b4bd38fef9c3f10,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730828559575755109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24n9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cb0df5-d57b-4782-bae7-4ac5639dc01e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:cadd2623fa5245c421015c9e1411ea025747aebf7b85d4096a4f25cd8bda290a,PodSandboxId:356da5ed5f56d6cdf965d434988e98f2ea4c48d52ac8d905b9415e188934147a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730828548475236815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1401c4598f2e3dfc80febc83d26bd72,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:9d1b0135e5cf414fb8a3461ca0b5503878235ee3ec17aef58adf381fe1af14b8,PodSandboxId:718e09d3d1bf36d15904b46155f0c6aaeda36ff2881306a06fc43a8771b9e61a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730828548464806702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d09859585694c955c161417e3cd2061,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:a8df3d059249156bad67704cc7dd20dce767205d93e153a4008f55bd62bd6d3c,PodSandboxId:99b3e36649d9d135df7afae49b460f9b918a70235f490ac024c5232e34ffeb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730828548459917560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2510abf723755cf16e6c080513cf1135,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:467f16dcbd4a797a1ef27dc71b2725cef0e3de49915c67a4d2b6f0d235b64f7d,PodSandboxId:c6b5c3d1a21b77bb05b0336bff301bcbb0cbda0b76d745f50b8b1196ee6fead7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730828548455543238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d481b44bde15a13310363b908cd76a45,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=2aad1eb4-fd0d-4a57-a47b-c053b37a2c03 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cec3dd2fd1269       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   1346002c1f6a7       nginx
	6e9fe762f3082       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   741d09d08bf73       busybox
	251293ae787de       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   31046b4a30845       ingress-nginx-controller-5f85ff4588-cmh5r
	07d744177af5b       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   d01120044a0e2       ingress-nginx-admission-patch-knwwm
	34f3e3c5c9f56       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   6ff0de49c585d       ingress-nginx-admission-create-9hwqj
	d33323a81892f       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   7330784d96737       metrics-server-84c5f94fbc-khd9b
	0487aadaec9bd       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   1784a3a31f665       amd-gpu-device-plugin-h5b9p
	bb1e06ef7e49c       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   468829bf456c7       kube-ingress-dns-minikube
	c7bda9b1ee1f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   66f71f6fb3fc2       storage-provisioner
	3adb7b81c7581       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   f784a16ce173d       coredns-7c65d6cfc9-cttxl
	feda2f7d89c62       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago       Running             kube-proxy                0                   b2f5ff6e95dfe       kube-proxy-24n9l
	cadd2623fa524       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   356da5ed5f56d       kube-apiserver-addons-320753
	9d1b0135e5cf4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   718e09d3d1bf3       kube-controller-manager-addons-320753
	a8df3d0592491       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   99b3e36649d9d       etcd-addons-320753
	467f16dcbd4a7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   c6b5c3d1a21b7       kube-scheduler-addons-320753
	
	
	==> coredns [3adb7b81c758126188ee64df255d65d9c40226620bc2f22e7229bbcbfdc5e6f1] <==
	[INFO] 10.244.0.8:41889 - 31769 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000125676s
	[INFO] 10.244.0.8:41889 - 2375 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000159007s
	[INFO] 10.244.0.8:41889 - 3047 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000097255s
	[INFO] 10.244.0.8:41889 - 62459 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000095721s
	[INFO] 10.244.0.8:41889 - 14162 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000094355s
	[INFO] 10.244.0.8:41889 - 23244 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000095198s
	[INFO] 10.244.0.8:41889 - 21741 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000103847s
	[INFO] 10.244.0.8:49861 - 2415 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001146s
	[INFO] 10.244.0.8:49861 - 2721 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000103428s
	[INFO] 10.244.0.8:59071 - 24527 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008545s
	[INFO] 10.244.0.8:59071 - 24758 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034218s
	[INFO] 10.244.0.8:41162 - 64325 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095412s
	[INFO] 10.244.0.8:41162 - 64553 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066615s
	[INFO] 10.244.0.8:57194 - 18297 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109917s
	[INFO] 10.244.0.8:57194 - 18490 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114135s
	[INFO] 10.244.0.23:36800 - 53958 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000997003s
	[INFO] 10.244.0.23:38073 - 8671 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000286715s
	[INFO] 10.244.0.23:35163 - 15820 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000204861s
	[INFO] 10.244.0.23:54476 - 33737 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000181467s
	[INFO] 10.244.0.23:56866 - 55277 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000210248s
	[INFO] 10.244.0.23:37936 - 12235 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000178474s
	[INFO] 10.244.0.23:34381 - 37096 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000778893s
	[INFO] 10.244.0.23:47061 - 25398 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001523892s
	[INFO] 10.244.0.27:48910 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000489898s
	[INFO] 10.244.0.27:41992 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143211s
	
	
	==> describe nodes <==
	Name:               addons-320753
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-320753
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=addons-320753
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T17_42_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-320753
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 17:42:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-320753
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 17:47:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 17:45:37 +0000   Tue, 05 Nov 2024 17:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 17:45:37 +0000   Tue, 05 Nov 2024 17:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 17:45:37 +0000   Tue, 05 Nov 2024 17:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 17:45:37 +0000   Tue, 05 Nov 2024 17:42:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    addons-320753
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 519d04ef668a4324b1894f66ef22ec87
	  System UUID:                519d04ef-668a-4324-b189-4f66ef22ec87
	  Boot ID:                    84d65ca6-e314-4af0-a328-03b507c1d577
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     hello-world-app-55bf9c44b4-gmrtj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-cmh5r    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m43s
	  kube-system                 amd-gpu-device-plugin-h5b9p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 coredns-7c65d6cfc9-cttxl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m51s
	  kube-system                 etcd-addons-320753                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m56s
	  kube-system                 kube-apiserver-addons-320753                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-320753        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-24n9l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-addons-320753                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 metrics-server-84c5f94fbc-khd9b              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m46s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m48s                kube-proxy       
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node addons-320753 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node addons-320753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node addons-320753 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m56s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m56s                kubelet          Node addons-320753 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s                kubelet          Node addons-320753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s                kubelet          Node addons-320753 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m55s                kubelet          Node addons-320753 status is now: NodeReady
	  Normal  RegisteredNode           4m52s                node-controller  Node addons-320753 event: Registered Node addons-320753 in Controller
	  Normal  CIDRAssignmentFailed     4m52s                cidrAllocator    Node addons-320753 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.082762] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.342454] systemd-fstab-generator[1332]: Ignoring "noauto" option for root device
	[  +0.147995] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.103080] kauditd_printk_skb: 139 callbacks suppressed
	[  +5.032916] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.657212] kauditd_printk_skb: 71 callbacks suppressed
	[Nov 5 17:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.478901] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.310188] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.272035] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.868464] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.874820] kauditd_printk_skb: 45 callbacks suppressed
	[Nov 5 17:44] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.737351] kauditd_printk_skb: 14 callbacks suppressed
	[ +23.480672] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.341328] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.004968] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.016302] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 5 17:45] kauditd_printk_skb: 61 callbacks suppressed
	[  +7.425473] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.630895] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.317463] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.784176] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.868482] kauditd_printk_skb: 7 callbacks suppressed
	[Nov 5 17:47] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [a8df3d059249156bad67704cc7dd20dce767205d93e153a4008f55bd62bd6d3c] <==
	{"level":"info","ts":"2024-11-05T17:43:57.203556Z","caller":"traceutil/trace.go:171","msg":"trace[1068852426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1123; }","duration":"245.619405ms","start":"2024-11-05T17:43:56.957931Z","end":"2024-11-05T17:43:57.203551Z","steps":["trace[1068852426] 'agreement among raft nodes before linearized reading'  (duration: 245.581582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:43:57.203744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.022987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2024-11-05T17:43:57.203776Z","caller":"traceutil/trace.go:171","msg":"trace[2043905244] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1123; }","duration":"166.055948ms","start":"2024-11-05T17:43:57.037714Z","end":"2024-11-05T17:43:57.203770Z","steps":["trace[2043905244] 'agreement among raft nodes before linearized reading'  (duration: 165.947529ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:44:00.372026Z","caller":"traceutil/trace.go:171","msg":"trace[118910243] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"450.741229ms","start":"2024-11-05T17:43:59.921218Z","end":"2024-11-05T17:44:00.371959Z","steps":["trace[118910243] 'process raft request'  (duration: 450.628192ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:44:00.372172Z","caller":"traceutil/trace.go:171","msg":"trace[1757073185] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1170; }","duration":"152.50069ms","start":"2024-11-05T17:44:00.219520Z","end":"2024-11-05T17:44:00.372021Z","steps":["trace[1757073185] 'read index received'  (duration: 152.491158ms)","trace[1757073185] 'applied index is now lower than readState.Index'  (duration: 8.28µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T17:44:00.372327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.800471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:44:00.372394Z","caller":"traceutil/trace.go:171","msg":"trace[897402176] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1139; }","duration":"152.883215ms","start":"2024-11-05T17:44:00.219502Z","end":"2024-11-05T17:44:00.372386Z","steps":["trace[897402176] 'agreement among raft nodes before linearized reading'  (duration: 152.742893ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:44:00.372365Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T17:43:59.921200Z","time spent":"451.00298ms","remote":"127.0.0.1:41134","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1127 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-11-05T17:44:00.376411Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.71625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:44:00.376499Z","caller":"traceutil/trace.go:171","msg":"trace[526389934] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"114.813676ms","start":"2024-11-05T17:44:00.261677Z","end":"2024-11-05T17:44:00.376490Z","steps":["trace[526389934] 'agreement among raft nodes before linearized reading'  (duration: 114.634909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:44:00.377127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.970584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:44:00.377214Z","caller":"traceutil/trace.go:171","msg":"trace[553349832] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1140; }","duration":"105.060143ms","start":"2024-11-05T17:44:00.272142Z","end":"2024-11-05T17:44:00.377202Z","steps":["trace[553349832] 'agreement among raft nodes before linearized reading'  (duration: 104.950122ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:44:43.941424Z","caller":"traceutil/trace.go:171","msg":"trace[295256943] transaction","detail":"{read_only:false; response_revision:1349; number_of_response:1; }","duration":"273.369648ms","start":"2024-11-05T17:44:43.667986Z","end":"2024-11-05T17:44:43.941356Z","steps":["trace[295256943] 'process raft request'  (duration: 273.143357ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:45:14.558583Z","caller":"traceutil/trace.go:171","msg":"trace[1926740521] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"149.725581ms","start":"2024-11-05T17:45:14.408822Z","end":"2024-11-05T17:45:14.558548Z","steps":["trace[1926740521] 'process raft request'  (duration: 149.529941ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:45:22.641510Z","caller":"traceutil/trace.go:171","msg":"trace[146489829] linearizableReadLoop","detail":"{readStateIndex:1700; appliedIndex:1699; }","duration":"217.533791ms","start":"2024-11-05T17:45:22.423962Z","end":"2024-11-05T17:45:22.641496Z","steps":["trace[146489829] 'read index received'  (duration: 217.430655ms)","trace[146489829] 'applied index is now lower than readState.Index'  (duration: 102.657µs)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:45:22.641601Z","caller":"traceutil/trace.go:171","msg":"trace[2035047528] transaction","detail":"{read_only:false; response_revision:1643; number_of_response:1; }","duration":"229.008433ms","start":"2024-11-05T17:45:22.412587Z","end":"2024-11-05T17:45:22.641595Z","steps":["trace[2035047528] 'process raft request'  (duration: 228.799372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:45:22.641813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.788954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-11-05T17:45:22.641836Z","caller":"traceutil/trace.go:171","msg":"trace[797036277] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1643; }","duration":"217.871626ms","start":"2024-11-05T17:45:22.423959Z","end":"2024-11-05T17:45:22.641830Z","steps":["trace[797036277] 'agreement among raft nodes before linearized reading'  (duration: 217.721677ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:45:22.641886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.180323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:45:22.641919Z","caller":"traceutil/trace.go:171","msg":"trace[2145014752] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1643; }","duration":"172.231623ms","start":"2024-11-05T17:45:22.469679Z","end":"2024-11-05T17:45:22.641911Z","steps":["trace[2145014752] 'agreement among raft nodes before linearized reading'  (duration: 172.16511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:45:22.642085Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.697881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-dc83c679-ddcc-4681-bf85-ba96348fe5e0\" ","response":"range_response_count:1 size:1262"}
	{"level":"info","ts":"2024-11-05T17:45:22.642109Z","caller":"traceutil/trace.go:171","msg":"trace[924296525] range","detail":"{range_begin:/registry/persistentvolumes/pvc-dc83c679-ddcc-4681-bf85-ba96348fe5e0; range_end:; response_count:1; response_revision:1643; }","duration":"100.773533ms","start":"2024-11-05T17:45:22.541328Z","end":"2024-11-05T17:45:22.642102Z","steps":["trace[924296525] 'agreement among raft nodes before linearized reading'  (duration: 100.662206ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:45:48.274296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.405991ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10689173857718937377 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/csi-snapshotter-role\" mod_revision:779 > success:<request_delete_range:<key:\"/registry/clusterrolebindings/csi-snapshotter-role\" > > failure:<request_range:<key:\"/registry/clusterrolebindings/csi-snapshotter-role\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-11-05T17:45:48.274371Z","caller":"traceutil/trace.go:171","msg":"trace[1908162290] linearizableReadLoop","detail":"{readStateIndex:1873; appliedIndex:1872; }","duration":"213.986536ms","start":"2024-11-05T17:45:48.060376Z","end":"2024-11-05T17:45:48.274362Z","steps":["trace[1908162290] 'read index received'  (duration: 9.156139ms)","trace[1908162290] 'applied index is now lower than readState.Index'  (duration: 204.829491ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:45:48.274430Z","caller":"traceutil/trace.go:171","msg":"trace[1992322215] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1808; }","duration":"276.892309ms","start":"2024-11-05T17:45:47.997532Z","end":"2024-11-05T17:45:48.274425Z","steps":["trace[1992322215] 'process raft request'  (duration: 72.042473ms)","trace[1992322215] 'compare'  (duration: 203.964119ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:47:29 up 5 min,  0 users,  load average: 0.58, 1.33, 0.73
	Linux addons-320753 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cadd2623fa5245c421015c9e1411ea025747aebf7b85d4096a4f25cd8bda290a] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1105 17:44:34.193797       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.36.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.36.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.36.76:443: connect: connection refused" logger="UnhandledError"
	I1105 17:44:34.217751       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1105 17:44:39.081724       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.14.226"}
	I1105 17:45:02.324546       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1105 17:45:03.461159       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1105 17:45:07.955429       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1105 17:45:08.148611       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.52.203"}
	E1105 17:45:14.690739       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1105 17:45:30.537276       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1105 17:45:47.012417       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.012521       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:45:47.027835       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.027893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:45:47.077695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.078304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:45:47.147537       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.147590       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:45:47.155716       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.155765       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1105 17:45:48.148573       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1105 17:45:48.156505       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1105 17:45:48.163105       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1105 17:47:28.210389       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.162.23"}
	
	
	==> kube-controller-manager [9d1b0135e5cf414fb8a3461ca0b5503878235ee3ec17aef58adf381fe1af14b8] <==
	I1105 17:46:07.902491       1 shared_informer.go:320] Caches are synced for resource quota
	I1105 17:46:08.332784       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1105 17:46:08.332886       1 shared_informer.go:320] Caches are synced for garbage collector
	W1105 17:46:09.775001       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:46:09.775169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:46:12.955594       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:46:12.955716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:46:20.466634       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:46:20.466929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:46:21.580761       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:46:21.580896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:46:23.233235       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:46:23.233282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:46:55.964233       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:46:55.964334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:46:58.139968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:46:58.140144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:47:01.314851       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:47:01.314971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:47:09.727503       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:47:09.727613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1105 17:47:28.019829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.389951ms"
	I1105 17:47:28.031548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.653435ms"
	I1105 17:47:28.032239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.273µs"
	I1105 17:47:28.038431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="429.739µs"
	
	
	==> kube-proxy [feda2f7d89c6255b286785586987c7cd681689d3a3fd976f599ebc5569097346] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 17:42:40.585927       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 17:42:40.622102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.201"]
	E1105 17:42:40.622182       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 17:42:40.729298       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 17:42:40.729328       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 17:42:40.729362       1 server_linux.go:169] "Using iptables Proxier"
	I1105 17:42:40.732599       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 17:42:40.732874       1 server.go:483] "Version info" version="v1.31.2"
	I1105 17:42:40.732888       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 17:42:40.734817       1 config.go:199] "Starting service config controller"
	I1105 17:42:40.734846       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 17:42:40.734863       1 config.go:105] "Starting endpoint slice config controller"
	I1105 17:42:40.734867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 17:42:40.735285       1 config.go:328] "Starting node config controller"
	I1105 17:42:40.735309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 17:42:40.835726       1 shared_informer.go:320] Caches are synced for node config
	I1105 17:42:40.835755       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 17:42:40.835762       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [467f16dcbd4a797a1ef27dc71b2725cef0e3de49915c67a4d2b6f0d235b64f7d] <==
	W1105 17:42:30.864854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 17:42:30.865074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.716092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 17:42:31.716194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.741307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 17:42:31.741350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.821771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 17:42:31.821861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.875213       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 17:42:31.875309       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 17:42:31.898655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 17:42:31.898779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.918456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 17:42:31.918824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.962822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 17:42:31.962870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.967187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 17:42:31.967265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:32.099391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 17:42:32.099444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:32.177329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 17:42:32.177377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:32.177465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1105 17:42:32.177493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1105 17:42:33.856885       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 17:47:23 addons-320753 kubelet[1204]: E1105 17:47:23.809103    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828843808743242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594745,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:47:23 addons-320753 kubelet[1204]: E1105 17:47:23.809157    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828843808743242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594745,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011444    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955e4299-ba79-4530-8ebe-78c35525b9de" containerName="volume-snapshot-controller"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011493    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="csi-provisioner"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011504    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="csi-snapshotter"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011510    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="53cca88c-38b8-486f-ac5b-b155d7a0fcbd" containerName="csi-resizer"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011520    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07c0442e-f739-45c1-bce1-70dba665cbba" containerName="csi-attacher"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011526    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="hostpath"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011532    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="csi-external-health-monitor-controller"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011538    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="node-driver-registrar"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011544    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24c4c41d-37d5-45b9-a1db-f0a70d94983b" containerName="volume-snapshot-controller"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011549    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69731bf9-840a-4d23-aa3c-f8dca02e4628" containerName="task-pv-container"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: E1105 17:47:28.011555    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="liveness-probe"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011601    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c0442e-f739-45c1-bce1-70dba665cbba" containerName="csi-attacher"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011608    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="hostpath"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011615    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cca88c-38b8-486f-ac5b-b155d7a0fcbd" containerName="csi-resizer"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011620    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="csi-external-health-monitor-controller"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011626    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="69731bf9-840a-4d23-aa3c-f8dca02e4628" containerName="task-pv-container"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011631    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c4c41d-37d5-45b9-a1db-f0a70d94983b" containerName="volume-snapshot-controller"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011635    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="955e4299-ba79-4530-8ebe-78c35525b9de" containerName="volume-snapshot-controller"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011640    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="csi-provisioner"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011645    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="csi-snapshotter"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011649    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="node-driver-registrar"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.011654    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="55586e10-8074-4b16-8197-d3b8dfeb30fd" containerName="liveness-probe"
	Nov 05 17:47:28 addons-320753 kubelet[1204]: I1105 17:47:28.087731    1204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8xcl\" (UniqueName: \"kubernetes.io/projected/69f94676-d85f-4400-8899-ebaf3c04f092-kube-api-access-f8xcl\") pod \"hello-world-app-55bf9c44b4-gmrtj\" (UID: \"69f94676-d85f-4400-8899-ebaf3c04f092\") " pod="default/hello-world-app-55bf9c44b4-gmrtj"
	
	
	==> storage-provisioner [c7bda9b1ee1f520a339c1c38c4190b89e3d54fda6da4f6bab3f97307652093ee] <==
	I1105 17:42:44.546376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 17:42:44.632589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 17:42:44.632654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 17:42:44.690370       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 17:42:44.690525       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-320753_00115da2-6d14-4553-8f96-a127f1403bf1!
	I1105 17:42:44.690591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66faf5da-69be-4e5b-a7e0-be6255ac4b49", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-320753_00115da2-6d14-4553-8f96-a127f1403bf1 became leader
	I1105 17:42:44.896190       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-320753_00115da2-6d14-4553-8f96-a127f1403bf1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-320753 -n addons-320753
helpers_test.go:261: (dbg) Run:  kubectl --context addons-320753 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-gmrtj ingress-nginx-admission-create-9hwqj ingress-nginx-admission-patch-knwwm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-320753 describe pod hello-world-app-55bf9c44b4-gmrtj ingress-nginx-admission-create-9hwqj ingress-nginx-admission-patch-knwwm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-320753 describe pod hello-world-app-55bf9c44b4-gmrtj ingress-nginx-admission-create-9hwqj ingress-nginx-admission-patch-knwwm: exit status 1 (66.124868ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-gmrtj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-320753/192.168.39.201
	Start Time:       Tue, 05 Nov 2024 17:47:28 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f8xcl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f8xcl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-gmrtj to addons-320753
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9hwqj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-knwwm" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-320753 describe pod hello-world-app-55bf9c44b4-gmrtj ingress-nginx-admission-create-9hwqj ingress-nginx-admission-patch-knwwm: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 addons disable ingress-dns --alsologtostderr -v=1: (1.357706676s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 addons disable ingress --alsologtostderr -v=1: (7.789712768s)
--- FAIL: TestAddons/parallel/Ingress (151.89s)

TestAddons/parallel/MetricsServer (331.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.399879ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-khd9b" [5c9668b9-1b38-4b29-a16b-750ee7a74276] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003468039s
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (77.22248ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 2m21.466939036s

                                                
                                                
** /stderr **
I1105 17:45:01.468880   15492 retry.go:31] will retry after 4.241721693s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (65.448148ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 2m25.775598139s

                                                
                                                
** /stderr **
I1105 17:45:05.777315   15492 retry.go:31] will retry after 4.775729108s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (76.42161ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 2m30.627582683s

                                                
                                                
** /stderr **
I1105 17:45:10.629809   15492 retry.go:31] will retry after 5.589893638s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (66.27365ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 2m36.285243216s

                                                
                                                
** /stderr **
I1105 17:45:16.287023   15492 retry.go:31] will retry after 9.15382884s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (68.349295ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 2m45.508914462s

                                                
                                                
** /stderr **
I1105 17:45:25.510414   15492 retry.go:31] will retry after 18.591665228s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (64.466706ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 3m4.165134768s

                                                
                                                
** /stderr **
I1105 17:45:44.166880   15492 retry.go:31] will retry after 12.429875767s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (61.857924ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 3m16.657542971s

                                                
                                                
** /stderr **
I1105 17:45:56.659350   15492 retry.go:31] will retry after 30.11752392s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (64.565305ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 3m46.840536879s

                                                
                                                
** /stderr **
I1105 17:46:26.842747   15492 retry.go:31] will retry after 44.385840061s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (62.319575ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 4m31.291921918s

                                                
                                                
** /stderr **
I1105 17:47:11.294046   15492 retry.go:31] will retry after 40.653706836s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (62.459497ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 5m12.012770119s

                                                
                                                
** /stderr **
I1105 17:47:52.014578   15492 retry.go:31] will retry after 41.54432975s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (62.990206ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 5m53.625975858s

                                                
                                                
** /stderr **
I1105 17:48:33.627989   15492 retry.go:31] will retry after 58.928736377s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (64.576717ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 6m52.620285239s

                                                
                                                
** /stderr **
I1105 17:49:32.622038   15492 retry.go:31] will retry after 52.060611972s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-320753 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-320753 top pods -n kube-system: exit status 1 (62.369386ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-h5b9p, age: 7m44.751099374s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
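The "will retry after ..." lines above come from the test helper's backoff (retry.go) wrapped around `kubectl top pods`. The following is a minimal, illustrative Go sketch of that pattern, not the actual helper; the kubectl context name is taken from the logs above.

// topretry.go: illustrative sketch of retrying `kubectl top pods -n kube-system`
// with a growing backoff until metrics are served or an overall deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const context = "addons-320753" // assumed kubectl context from the logs above
	backoff := 5 * time.Second
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // the real helper randomizes its intervals; doubling keeps the sketch simple
	}
	fmt.Println("failed checking metric server: metrics never became available")
}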
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-320753 -n addons-320753
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 logs -n 25: (1.155475092s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-753477                                                                     | download-only-753477 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| delete  | -p download-only-083264                                                                     | download-only-083264 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-133090 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | binary-mirror-133090                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38161                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-133090                                                                     | binary-mirror-133090 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| addons  | disable dashboard -p                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | addons-320753                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | addons-320753                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-320753 --wait=true                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | -p addons-320753                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-320753 ip                                                                            | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-320753 ssh cat                                                                       | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | /opt/local-path-provisioner/pvc-dc83c679-ddcc-4681-bf85-ba96348fe5e0_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:45 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:45 UTC | 05 Nov 24 17:45 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-320753 ssh curl -s                                                                   | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:45 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:45 UTC | 05 Nov 24 17:45 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-320753 addons                                                                        | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:45 UTC | 05 Nov 24 17:45 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-320753 ip                                                                            | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:47 UTC | 05 Nov 24 17:47 UTC |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:47 UTC | 05 Nov 24 17:47 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-320753 addons disable                                                                | addons-320753        | jenkins | v1.34.0 | 05 Nov 24 17:47 UTC | 05 Nov 24 17:47 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:41:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:41:54.631172   16242 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:41:54.631269   16242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:54.631276   16242 out.go:358] Setting ErrFile to fd 2...
	I1105 17:41:54.631280   16242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:54.631441   16242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 17:41:54.632028   16242 out.go:352] Setting JSON to false
	I1105 17:41:54.632921   16242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1457,"bootTime":1730827058,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 17:41:54.632977   16242 start.go:139] virtualization: kvm guest
	I1105 17:41:54.634993   16242 out.go:177] * [addons-320753] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 17:41:54.636266   16242 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 17:41:54.636281   16242 notify.go:220] Checking for updates...
	I1105 17:41:54.638838   16242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:41:54.640171   16242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 17:41:54.641374   16242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 17:41:54.642502   16242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 17:41:54.643629   16242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 17:41:54.644809   16242 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:41:54.675700   16242 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 17:41:54.677002   16242 start.go:297] selected driver: kvm2
	I1105 17:41:54.677018   16242 start.go:901] validating driver "kvm2" against <nil>
	I1105 17:41:54.677034   16242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 17:41:54.677732   16242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:54.677818   16242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 17:41:54.692490   16242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 17:41:54.692552   16242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:41:54.692803   16242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:41:54.692836   16242 cni.go:84] Creating CNI manager for ""
	I1105 17:41:54.692874   16242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 17:41:54.692882   16242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 17:41:54.692933   16242 start.go:340] cluster config:
	{Name:addons-320753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:41:54.693018   16242 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:54.695468   16242 out.go:177] * Starting "addons-320753" primary control-plane node in "addons-320753" cluster
	I1105 17:41:54.696549   16242 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:41:54.696582   16242 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 17:41:54.696590   16242 cache.go:56] Caching tarball of preloaded images
	I1105 17:41:54.696667   16242 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 17:41:54.696680   16242 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 17:41:54.696963   16242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/config.json ...
	I1105 17:41:54.696983   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/config.json: {Name:mk664197e3260b062aa2572735b9e61ad88cd4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:54.697122   16242 start.go:360] acquireMachinesLock for addons-320753: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 17:41:54.697163   16242 start.go:364] duration metric: took 29.509µs to acquireMachinesLock for "addons-320753"
	I1105 17:41:54.697179   16242 start.go:93] Provisioning new machine with config: &{Name:addons-320753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:41:54.697225   16242 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 17:41:54.698582   16242 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1105 17:41:54.698706   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:41:54.698739   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:41:54.712411   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I1105 17:41:54.712850   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:41:54.713383   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:41:54.713402   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:41:54.713669   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:41:54.713856   16242 main.go:141] libmachine: (addons-320753) Calling .GetMachineName
	I1105 17:41:54.713963   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:41:54.714103   16242 start.go:159] libmachine.API.Create for "addons-320753" (driver="kvm2")
	I1105 17:41:54.714132   16242 client.go:168] LocalClient.Create starting
	I1105 17:41:54.714164   16242 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 17:41:55.005541   16242 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 17:41:55.130181   16242 main.go:141] libmachine: Running pre-create checks...
	I1105 17:41:55.130206   16242 main.go:141] libmachine: (addons-320753) Calling .PreCreateCheck
	I1105 17:41:55.130699   16242 main.go:141] libmachine: (addons-320753) Calling .GetConfigRaw
	I1105 17:41:55.131063   16242 main.go:141] libmachine: Creating machine...
	I1105 17:41:55.131074   16242 main.go:141] libmachine: (addons-320753) Calling .Create
	I1105 17:41:55.131202   16242 main.go:141] libmachine: (addons-320753) Creating KVM machine...
	I1105 17:41:55.132407   16242 main.go:141] libmachine: (addons-320753) DBG | found existing default KVM network
	I1105 17:41:55.133134   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.132998   16264 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I1105 17:41:55.133149   16242 main.go:141] libmachine: (addons-320753) DBG | created network xml: 
	I1105 17:41:55.133162   16242 main.go:141] libmachine: (addons-320753) DBG | <network>
	I1105 17:41:55.133172   16242 main.go:141] libmachine: (addons-320753) DBG |   <name>mk-addons-320753</name>
	I1105 17:41:55.133185   16242 main.go:141] libmachine: (addons-320753) DBG |   <dns enable='no'/>
	I1105 17:41:55.133193   16242 main.go:141] libmachine: (addons-320753) DBG |   
	I1105 17:41:55.133205   16242 main.go:141] libmachine: (addons-320753) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1105 17:41:55.133217   16242 main.go:141] libmachine: (addons-320753) DBG |     <dhcp>
	I1105 17:41:55.133231   16242 main.go:141] libmachine: (addons-320753) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1105 17:41:55.133248   16242 main.go:141] libmachine: (addons-320753) DBG |     </dhcp>
	I1105 17:41:55.133261   16242 main.go:141] libmachine: (addons-320753) DBG |   </ip>
	I1105 17:41:55.133275   16242 main.go:141] libmachine: (addons-320753) DBG |   
	I1105 17:41:55.133287   16242 main.go:141] libmachine: (addons-320753) DBG | </network>
	I1105 17:41:55.133297   16242 main.go:141] libmachine: (addons-320753) DBG | 
	I1105 17:41:55.138540   16242 main.go:141] libmachine: (addons-320753) DBG | trying to create private KVM network mk-addons-320753 192.168.39.0/24...
	I1105 17:41:55.199087   16242 main.go:141] libmachine: (addons-320753) DBG | private KVM network mk-addons-320753 192.168.39.0/24 created
	I1105 17:41:55.199118   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.199066   16264 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 17:41:55.199151   16242 main.go:141] libmachine: (addons-320753) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753 ...
	I1105 17:41:55.199173   16242 main.go:141] libmachine: (addons-320753) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 17:41:55.199232   16242 main.go:141] libmachine: (addons-320753) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 17:41:55.475013   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.474849   16264 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa...
	I1105 17:41:55.517210   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.517077   16264 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/addons-320753.rawdisk...
	I1105 17:41:55.517241   16242 main.go:141] libmachine: (addons-320753) DBG | Writing magic tar header
	I1105 17:41:55.517254   16242 main.go:141] libmachine: (addons-320753) DBG | Writing SSH key tar header
	I1105 17:41:55.517270   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:55.517201   16264 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753 ...
	I1105 17:41:55.517305   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753
	I1105 17:41:55.517358   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753 (perms=drwx------)
	I1105 17:41:55.517387   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 17:41:55.517400   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 17:41:55.517418   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 17:41:55.517430   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 17:41:55.517442   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 17:41:55.517483   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 17:41:55.517511   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 17:41:55.517521   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 17:41:55.517531   16242 main.go:141] libmachine: (addons-320753) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 17:41:55.517539   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home/jenkins
	I1105 17:41:55.517544   16242 main.go:141] libmachine: (addons-320753) Creating domain...
	I1105 17:41:55.517557   16242 main.go:141] libmachine: (addons-320753) DBG | Checking permissions on dir: /home
	I1105 17:41:55.517581   16242 main.go:141] libmachine: (addons-320753) DBG | Skipping /home - not owner
	I1105 17:41:55.518489   16242 main.go:141] libmachine: (addons-320753) define libvirt domain using xml: 
	I1105 17:41:55.518514   16242 main.go:141] libmachine: (addons-320753) <domain type='kvm'>
	I1105 17:41:55.518522   16242 main.go:141] libmachine: (addons-320753)   <name>addons-320753</name>
	I1105 17:41:55.518534   16242 main.go:141] libmachine: (addons-320753)   <memory unit='MiB'>4000</memory>
	I1105 17:41:55.518543   16242 main.go:141] libmachine: (addons-320753)   <vcpu>2</vcpu>
	I1105 17:41:55.518550   16242 main.go:141] libmachine: (addons-320753)   <features>
	I1105 17:41:55.518560   16242 main.go:141] libmachine: (addons-320753)     <acpi/>
	I1105 17:41:55.518569   16242 main.go:141] libmachine: (addons-320753)     <apic/>
	I1105 17:41:55.518576   16242 main.go:141] libmachine: (addons-320753)     <pae/>
	I1105 17:41:55.518583   16242 main.go:141] libmachine: (addons-320753)     
	I1105 17:41:55.518593   16242 main.go:141] libmachine: (addons-320753)   </features>
	I1105 17:41:55.518604   16242 main.go:141] libmachine: (addons-320753)   <cpu mode='host-passthrough'>
	I1105 17:41:55.518618   16242 main.go:141] libmachine: (addons-320753)   
	I1105 17:41:55.518634   16242 main.go:141] libmachine: (addons-320753)   </cpu>
	I1105 17:41:55.518639   16242 main.go:141] libmachine: (addons-320753)   <os>
	I1105 17:41:55.518647   16242 main.go:141] libmachine: (addons-320753)     <type>hvm</type>
	I1105 17:41:55.518652   16242 main.go:141] libmachine: (addons-320753)     <boot dev='cdrom'/>
	I1105 17:41:55.518658   16242 main.go:141] libmachine: (addons-320753)     <boot dev='hd'/>
	I1105 17:41:55.518663   16242 main.go:141] libmachine: (addons-320753)     <bootmenu enable='no'/>
	I1105 17:41:55.518669   16242 main.go:141] libmachine: (addons-320753)   </os>
	I1105 17:41:55.518675   16242 main.go:141] libmachine: (addons-320753)   <devices>
	I1105 17:41:55.518680   16242 main.go:141] libmachine: (addons-320753)     <disk type='file' device='cdrom'>
	I1105 17:41:55.518693   16242 main.go:141] libmachine: (addons-320753)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/boot2docker.iso'/>
	I1105 17:41:55.518707   16242 main.go:141] libmachine: (addons-320753)       <target dev='hdc' bus='scsi'/>
	I1105 17:41:55.518712   16242 main.go:141] libmachine: (addons-320753)       <readonly/>
	I1105 17:41:55.518716   16242 main.go:141] libmachine: (addons-320753)     </disk>
	I1105 17:41:55.518721   16242 main.go:141] libmachine: (addons-320753)     <disk type='file' device='disk'>
	I1105 17:41:55.518729   16242 main.go:141] libmachine: (addons-320753)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 17:41:55.518737   16242 main.go:141] libmachine: (addons-320753)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/addons-320753.rawdisk'/>
	I1105 17:41:55.518743   16242 main.go:141] libmachine: (addons-320753)       <target dev='hda' bus='virtio'/>
	I1105 17:41:55.518748   16242 main.go:141] libmachine: (addons-320753)     </disk>
	I1105 17:41:55.518755   16242 main.go:141] libmachine: (addons-320753)     <interface type='network'>
	I1105 17:41:55.518761   16242 main.go:141] libmachine: (addons-320753)       <source network='mk-addons-320753'/>
	I1105 17:41:55.518767   16242 main.go:141] libmachine: (addons-320753)       <model type='virtio'/>
	I1105 17:41:55.518772   16242 main.go:141] libmachine: (addons-320753)     </interface>
	I1105 17:41:55.518782   16242 main.go:141] libmachine: (addons-320753)     <interface type='network'>
	I1105 17:41:55.518789   16242 main.go:141] libmachine: (addons-320753)       <source network='default'/>
	I1105 17:41:55.518803   16242 main.go:141] libmachine: (addons-320753)       <model type='virtio'/>
	I1105 17:41:55.518811   16242 main.go:141] libmachine: (addons-320753)     </interface>
	I1105 17:41:55.518815   16242 main.go:141] libmachine: (addons-320753)     <serial type='pty'>
	I1105 17:41:55.518821   16242 main.go:141] libmachine: (addons-320753)       <target port='0'/>
	I1105 17:41:55.518825   16242 main.go:141] libmachine: (addons-320753)     </serial>
	I1105 17:41:55.518831   16242 main.go:141] libmachine: (addons-320753)     <console type='pty'>
	I1105 17:41:55.518838   16242 main.go:141] libmachine: (addons-320753)       <target type='serial' port='0'/>
	I1105 17:41:55.518845   16242 main.go:141] libmachine: (addons-320753)     </console>
	I1105 17:41:55.518849   16242 main.go:141] libmachine: (addons-320753)     <rng model='virtio'>
	I1105 17:41:55.518856   16242 main.go:141] libmachine: (addons-320753)       <backend model='random'>/dev/random</backend>
	I1105 17:41:55.518859   16242 main.go:141] libmachine: (addons-320753)     </rng>
	I1105 17:41:55.518864   16242 main.go:141] libmachine: (addons-320753)     
	I1105 17:41:55.518870   16242 main.go:141] libmachine: (addons-320753)     
	I1105 17:41:55.518874   16242 main.go:141] libmachine: (addons-320753)   </devices>
	I1105 17:41:55.518879   16242 main.go:141] libmachine: (addons-320753) </domain>
	I1105 17:41:55.518885   16242 main.go:141] libmachine: (addons-320753) 
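	The domain XML assembled above is then registered with libvirt. As a rough illustration only (the kvm2 driver talks to libvirt directly rather than shelling out), the same step could be performed with virsh; the XML file name below is assumed.

// definedomain.go: illustrative sketch of defining and starting a libvirt domain
// from an XML file with virsh; an approximation of the step logged above, not the
// driver's own implementation.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const domainXML = "addons-320753.xml" // assumed path holding the XML shown in the log
	for _, args := range [][]string{
		{"define", domainXML},      // register the domain with libvirt
		{"start", "addons-320753"}, // boot it
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
		fmt.Printf("virsh %v: %s", args, out)
	}
}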
	I1105 17:41:55.525389   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:3f:fa:0b in network default
	I1105 17:41:55.525873   16242 main.go:141] libmachine: (addons-320753) Ensuring networks are active...
	I1105 17:41:55.525897   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:55.526434   16242 main.go:141] libmachine: (addons-320753) Ensuring network default is active
	I1105 17:41:55.526696   16242 main.go:141] libmachine: (addons-320753) Ensuring network mk-addons-320753 is active
	I1105 17:41:55.528065   16242 main.go:141] libmachine: (addons-320753) Getting domain xml...
	I1105 17:41:55.528663   16242 main.go:141] libmachine: (addons-320753) Creating domain...
	I1105 17:41:56.922125   16242 main.go:141] libmachine: (addons-320753) Waiting to get IP...
	I1105 17:41:56.922874   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:56.923272   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:56.923299   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:56.923249   16264 retry.go:31] will retry after 268.68519ms: waiting for machine to come up
	I1105 17:41:57.193729   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:57.194218   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:57.194242   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:57.194161   16264 retry.go:31] will retry after 308.815288ms: waiting for machine to come up
	I1105 17:41:57.504533   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:57.505038   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:57.505061   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:57.504980   16264 retry.go:31] will retry after 340.827865ms: waiting for machine to come up
	I1105 17:41:57.847465   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:57.847965   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:57.847995   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:57.847928   16264 retry.go:31] will retry after 532.128569ms: waiting for machine to come up
	I1105 17:41:58.381449   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:58.381866   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:58.381894   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:58.381820   16264 retry.go:31] will retry after 550.436713ms: waiting for machine to come up
	I1105 17:41:58.933369   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:58.933706   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:58.933729   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:58.933662   16264 retry.go:31] will retry after 911.635128ms: waiting for machine to come up
	I1105 17:41:59.847254   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:41:59.847675   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:41:59.847703   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:41:59.847635   16264 retry.go:31] will retry after 971.876512ms: waiting for machine to come up
	I1105 17:42:00.821220   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:00.821644   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:00.821686   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:00.821610   16264 retry.go:31] will retry after 1.397416189s: waiting for machine to come up
	I1105 17:42:02.221022   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:02.221446   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:02.221473   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:02.221402   16264 retry.go:31] will retry after 1.160656426s: waiting for machine to come up
	I1105 17:42:03.383794   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:03.384209   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:03.384239   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:03.384165   16264 retry.go:31] will retry after 1.776821583s: waiting for machine to come up
	I1105 17:42:05.163003   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:05.163322   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:05.163348   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:05.163273   16264 retry.go:31] will retry after 2.125484758s: waiting for machine to come up
	I1105 17:42:07.290208   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:07.290579   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:07.290607   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:07.290526   16264 retry.go:31] will retry after 3.012964339s: waiting for machine to come up
	I1105 17:42:10.305078   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:10.305469   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:10.305490   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:10.305428   16264 retry.go:31] will retry after 2.81216672s: waiting for machine to come up
	I1105 17:42:13.121417   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:13.121817   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find current IP address of domain addons-320753 in network mk-addons-320753
	I1105 17:42:13.121841   16242 main.go:141] libmachine: (addons-320753) DBG | I1105 17:42:13.121780   16264 retry.go:31] will retry after 3.6760464s: waiting for machine to come up
	I1105 17:42:16.800415   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:16.800850   16242 main.go:141] libmachine: (addons-320753) Found IP for machine: 192.168.39.201
	I1105 17:42:16.800865   16242 main.go:141] libmachine: (addons-320753) Reserving static IP address...
	I1105 17:42:16.800874   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has current primary IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:16.801198   16242 main.go:141] libmachine: (addons-320753) DBG | unable to find host DHCP lease matching {name: "addons-320753", mac: "52:54:00:89:64:28", ip: "192.168.39.201"} in network mk-addons-320753
	I1105 17:42:16.869887   16242 main.go:141] libmachine: (addons-320753) Reserved static IP address: 192.168.39.201
	I1105 17:42:16.869917   16242 main.go:141] libmachine: (addons-320753) DBG | Getting to WaitForSSH function...
	I1105 17:42:16.869941   16242 main.go:141] libmachine: (addons-320753) Waiting for SSH to be available...
	I1105 17:42:16.872367   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:16.872792   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:minikube Clientid:01:52:54:00:89:64:28}
	I1105 17:42:16.872821   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:16.872944   16242 main.go:141] libmachine: (addons-320753) DBG | Using SSH client type: external
	I1105 17:42:16.872967   16242 main.go:141] libmachine: (addons-320753) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa (-rw-------)
	I1105 17:42:16.873004   16242 main.go:141] libmachine: (addons-320753) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 17:42:16.873017   16242 main.go:141] libmachine: (addons-320753) DBG | About to run SSH command:
	I1105 17:42:16.873033   16242 main.go:141] libmachine: (addons-320753) DBG | exit 0
	I1105 17:42:17.002832   16242 main.go:141] libmachine: (addons-320753) DBG | SSH cmd err, output: <nil>: 
	I1105 17:42:17.003118   16242 main.go:141] libmachine: (addons-320753) KVM machine creation complete!
	I1105 17:42:17.003480   16242 main.go:141] libmachine: (addons-320753) Calling .GetConfigRaw
	I1105 17:42:17.004047   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:17.004390   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:17.004548   16242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 17:42:17.004562   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:17.005768   16242 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 17:42:17.005780   16242 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 17:42:17.005785   16242 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 17:42:17.005790   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.007934   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.008250   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.008276   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.008431   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.008590   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.008728   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.008862   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.009009   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.009213   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.009224   16242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 17:42:17.106022   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 17:42:17.106057   16242 main.go:141] libmachine: Detecting the provisioner...
	I1105 17:42:17.106067   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.108626   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.108912   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.108939   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.109140   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.109320   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.109438   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.109572   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.109703   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.109879   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.109889   16242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 17:42:17.207375   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 17:42:17.207430   16242 main.go:141] libmachine: found compatible host: buildroot
	I1105 17:42:17.207437   16242 main.go:141] libmachine: Provisioning with buildroot...
	I1105 17:42:17.207443   16242 main.go:141] libmachine: (addons-320753) Calling .GetMachineName
	I1105 17:42:17.207693   16242 buildroot.go:166] provisioning hostname "addons-320753"
	I1105 17:42:17.207721   16242 main.go:141] libmachine: (addons-320753) Calling .GetMachineName
	I1105 17:42:17.207908   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.210765   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.211327   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.211350   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.211454   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.211610   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.211745   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.212016   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.212201   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.212375   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.212393   16242 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-320753 && echo "addons-320753" | sudo tee /etc/hostname
	I1105 17:42:17.324149   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-320753
	
	I1105 17:42:17.324173   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.326714   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.327038   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.327065   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.327255   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.327436   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.327592   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.327739   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.327911   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.328084   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.328105   16242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-320753' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-320753/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-320753' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 17:42:17.430829   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 17:42:17.430861   16242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 17:42:17.430888   16242 buildroot.go:174] setting up certificates
	I1105 17:42:17.430900   16242 provision.go:84] configureAuth start
	I1105 17:42:17.430912   16242 main.go:141] libmachine: (addons-320753) Calling .GetMachineName
	I1105 17:42:17.431223   16242 main.go:141] libmachine: (addons-320753) Calling .GetIP
	I1105 17:42:17.433608   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.433940   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.433975   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.434088   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.436116   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.436451   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.436478   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.436618   16242 provision.go:143] copyHostCerts
	I1105 17:42:17.436691   16242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 17:42:17.436797   16242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 17:42:17.436859   16242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 17:42:17.436905   16242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.addons-320753 san=[127.0.0.1 192.168.39.201 addons-320753 localhost minikube]
	I1105 17:42:17.700286   16242 provision.go:177] copyRemoteCerts
	I1105 17:42:17.700341   16242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 17:42:17.700362   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.702758   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.703091   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.703120   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.703277   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.703482   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.703622   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.703773   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:17.781314   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 17:42:17.804852   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 17:42:17.826905   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 17:42:17.848935   16242 provision.go:87] duration metric: took 418.021313ms to configureAuth
	I1105 17:42:17.848962   16242 buildroot.go:189] setting minikube options for container-runtime
	I1105 17:42:17.849136   16242 config.go:182] Loaded profile config "addons-320753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:42:17.849205   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:17.851739   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.852035   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:17.852067   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:17.852215   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:17.852397   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.852541   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:17.852680   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:17.852843   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:17.853035   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:17.853050   16242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 17:42:18.065191   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 17:42:18.065223   16242 main.go:141] libmachine: Checking connection to Docker...
	I1105 17:42:18.065232   16242 main.go:141] libmachine: (addons-320753) Calling .GetURL
	I1105 17:42:18.066446   16242 main.go:141] libmachine: (addons-320753) DBG | Using libvirt version 6000000
	I1105 17:42:18.068542   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.068879   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.068910   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.069042   16242 main.go:141] libmachine: Docker is up and running!
	I1105 17:42:18.069056   16242 main.go:141] libmachine: Reticulating splines...
	I1105 17:42:18.069064   16242 client.go:171] duration metric: took 23.354923216s to LocalClient.Create
	I1105 17:42:18.069093   16242 start.go:167] duration metric: took 23.354991027s to libmachine.API.Create "addons-320753"
	I1105 17:42:18.069113   16242 start.go:293] postStartSetup for "addons-320753" (driver="kvm2")
	I1105 17:42:18.069129   16242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 17:42:18.069151   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.069367   16242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 17:42:18.069387   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:18.071473   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.071758   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.071784   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.071919   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:18.072099   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.072240   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:18.072348   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:18.148938   16242 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 17:42:18.152915   16242 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 17:42:18.152937   16242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 17:42:18.153016   16242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 17:42:18.153053   16242 start.go:296] duration metric: took 83.92468ms for postStartSetup
	I1105 17:42:18.153092   16242 main.go:141] libmachine: (addons-320753) Calling .GetConfigRaw
	I1105 17:42:18.153699   16242 main.go:141] libmachine: (addons-320753) Calling .GetIP
	I1105 17:42:18.156143   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.156456   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.156486   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.156698   16242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/config.json ...
	I1105 17:42:18.156871   16242 start.go:128] duration metric: took 23.459639016s to createHost
	I1105 17:42:18.156892   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:18.159843   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.160233   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.160268   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.160413   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:18.160579   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.160731   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.160839   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:18.161005   16242 main.go:141] libmachine: Using SSH client type: native
	I1105 17:42:18.161205   16242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1105 17:42:18.161216   16242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 17:42:18.259567   16242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730828538.234711760
	
	I1105 17:42:18.259590   16242 fix.go:216] guest clock: 1730828538.234711760
	I1105 17:42:18.259598   16242 fix.go:229] Guest: 2024-11-05 17:42:18.23471176 +0000 UTC Remote: 2024-11-05 17:42:18.156883465 +0000 UTC m=+23.562279478 (delta=77.828295ms)
	I1105 17:42:18.259625   16242 fix.go:200] guest clock delta is within tolerance: 77.828295ms
	I1105 17:42:18.259656   16242 start.go:83] releasing machines lock for "addons-320753", held for 23.562482615s
	I1105 17:42:18.259682   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.259949   16242 main.go:141] libmachine: (addons-320753) Calling .GetIP
	I1105 17:42:18.262615   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.262939   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.262959   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.263113   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.263487   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.263634   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:18.263740   16242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 17:42:18.263784   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:18.263803   16242 ssh_runner.go:195] Run: cat /version.json
	I1105 17:42:18.263824   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:18.266380   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.266635   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.266661   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.266700   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.266797   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:18.266980   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.267121   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:18.267219   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:18.267238   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:18.267247   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:18.267394   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:18.267540   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:18.267696   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:18.267819   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:18.339556   16242 ssh_runner.go:195] Run: systemctl --version
	I1105 17:42:18.371653   16242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 17:42:18.530168   16242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 17:42:18.535544   16242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 17:42:18.535606   16242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 17:42:18.550854   16242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 17:42:18.550886   16242 start.go:495] detecting cgroup driver to use...
	I1105 17:42:18.550956   16242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 17:42:18.566002   16242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 17:42:18.579665   16242 docker.go:217] disabling cri-docker service (if available) ...
	I1105 17:42:18.579724   16242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 17:42:18.593161   16242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 17:42:18.606631   16242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 17:42:18.723216   16242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 17:42:18.857282   16242 docker.go:233] disabling docker service ...
	I1105 17:42:18.857340   16242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 17:42:18.871102   16242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 17:42:18.883893   16242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 17:42:19.013709   16242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 17:42:19.121073   16242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 17:42:19.133973   16242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 17:42:19.151047   16242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 17:42:19.151112   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.160667   16242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 17:42:19.160731   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.170353   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.179989   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.189698   16242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 17:42:19.199960   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.209832   16242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.225928   16242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:42:19.235245   16242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 17:42:19.243722   16242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 17:42:19.243770   16242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 17:42:19.256215   16242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 17:42:19.265500   16242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:42:19.373760   16242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 17:42:19.460159   16242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 17:42:19.460261   16242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 17:42:19.464571   16242 start.go:563] Will wait 60s for crictl version
	I1105 17:42:19.464641   16242 ssh_runner.go:195] Run: which crictl
	I1105 17:42:19.468045   16242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 17:42:19.509755   16242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 17:42:19.509865   16242 ssh_runner.go:195] Run: crio --version
	I1105 17:42:19.537428   16242 ssh_runner.go:195] Run: crio --version
	I1105 17:42:19.565435   16242 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 17:42:19.566881   16242 main.go:141] libmachine: (addons-320753) Calling .GetIP
	I1105 17:42:19.569222   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:19.569500   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:19.569522   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:19.569713   16242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 17:42:19.573490   16242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 17:42:19.585478   16242 kubeadm.go:883] updating cluster {Name:addons-320753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 17:42:19.585603   16242 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:42:19.585646   16242 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:42:19.615782   16242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 17:42:19.615868   16242 ssh_runner.go:195] Run: which lz4
	I1105 17:42:19.619479   16242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 17:42:19.623333   16242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 17:42:19.623363   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 17:42:20.741435   16242 crio.go:462] duration metric: took 1.121995054s to copy over tarball
	I1105 17:42:20.741499   16242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 17:42:22.849751   16242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108227s)
	I1105 17:42:22.849776   16242 crio.go:469] duration metric: took 2.108317016s to extract the tarball
	I1105 17:42:22.849783   16242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 17:42:22.886121   16242 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:42:22.925831   16242 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 17:42:22.925853   16242 cache_images.go:84] Images are preloaded, skipping loading
	I1105 17:42:22.925863   16242 kubeadm.go:934] updating node { 192.168.39.201 8443 v1.31.2 crio true true} ...
	I1105 17:42:22.926008   16242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-320753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 17:42:22.926092   16242 ssh_runner.go:195] Run: crio config
	I1105 17:42:22.970242   16242 cni.go:84] Creating CNI manager for ""
	I1105 17:42:22.970265   16242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 17:42:22.970276   16242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 17:42:22.970304   16242 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-320753 NodeName:addons-320753 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 17:42:22.970451   16242 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-320753"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.201"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 17:42:22.970519   16242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 17:42:22.979761   16242 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 17:42:22.979834   16242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 17:42:22.988460   16242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1105 17:42:23.004119   16242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 17:42:23.019649   16242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1105 17:42:23.035330   16242 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I1105 17:42:23.039201   16242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 17:42:23.050811   16242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:42:23.172474   16242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:42:23.188403   16242 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753 for IP: 192.168.39.201
	I1105 17:42:23.188425   16242 certs.go:194] generating shared ca certs ...
	I1105 17:42:23.188441   16242 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.188597   16242 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 17:42:23.341446   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt ...
	I1105 17:42:23.341474   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt: {Name:mkfa59703d59064c76459a190023e74d43463f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.341641   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key ...
	I1105 17:42:23.341651   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key: {Name:mk320346499bac546f45eab013d96c660693896c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.341727   16242 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 17:42:23.401273   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt ...
	I1105 17:42:23.401303   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt: {Name:mkeb19c5ec2a163cabde3019131e5181eee0cebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.401474   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key ...
	I1105 17:42:23.401485   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key: {Name:mkd319678fd41709a3afcd63022818e4ae49d586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.401562   16242 certs.go:256] generating profile certs ...
	I1105 17:42:23.401629   16242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.key
	I1105 17:42:23.401644   16242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt with IP's: []
	I1105 17:42:23.517708   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt ...
	I1105 17:42:23.517739   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: {Name:mk8d52f2bc368e6ca0bc29f008e577c6fe6ecf37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.517908   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.key ...
	I1105 17:42:23.517919   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.key: {Name:mk92c264006ac762b887e9eb89473082abebe2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.517989   16242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key.336631c6
	I1105 17:42:23.518008   16242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt.336631c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.201]
	I1105 17:42:23.667156   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt.336631c6 ...
	I1105 17:42:23.667192   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt.336631c6: {Name:mk7ebc684cac944e8b0f2b7b96848a9ee121ece3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.667377   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key.336631c6 ...
	I1105 17:42:23.667400   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key.336631c6: {Name:mk38d47f5b2c3b19a5d51c257838cd81b7f02bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.667496   16242 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt.336631c6 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt
	I1105 17:42:23.667590   16242 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key.336631c6 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key
	I1105 17:42:23.667656   16242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.key
	I1105 17:42:23.667683   16242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.crt with IP's: []
	I1105 17:42:23.763579   16242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.crt ...
	I1105 17:42:23.763607   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.crt: {Name:mk7dbb29d9695fc682c94fc54b468c1a836fe393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.763778   16242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.key ...
	I1105 17:42:23.763795   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.key: {Name:mkfa196ba85129499b12cdf30348ef0008a6cc9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:23.764001   16242 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 17:42:23.764036   16242 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 17:42:23.764057   16242 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 17:42:23.764081   16242 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 17:42:23.764649   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 17:42:23.788539   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 17:42:23.811287   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 17:42:23.833802   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 17:42:23.859222   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1105 17:42:23.889204   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 17:42:23.917639   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 17:42:23.939672   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 17:42:23.961687   16242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 17:42:23.983158   16242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 17:42:23.997758   16242 ssh_runner.go:195] Run: openssl version
	I1105 17:42:24.003219   16242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 17:42:24.013088   16242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:42:24.017100   16242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:42:24.017156   16242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:42:24.022521   16242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 17:42:24.032715   16242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 17:42:24.036549   16242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 17:42:24.036604   16242 kubeadm.go:392] StartCluster: {Name:addons-320753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-320753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:42:24.036686   16242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 17:42:24.036733   16242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 17:42:24.078130   16242 cri.go:89] found id: ""
	I1105 17:42:24.078210   16242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 17:42:24.087544   16242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 17:42:24.096356   16242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 17:42:24.105272   16242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 17:42:24.105292   16242 kubeadm.go:157] found existing configuration files:
	
	I1105 17:42:24.105331   16242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 17:42:24.113816   16242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 17:42:24.113891   16242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 17:42:24.122684   16242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 17:42:24.130880   16242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 17:42:24.130936   16242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 17:42:24.139808   16242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 17:42:24.148030   16242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 17:42:24.148077   16242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 17:42:24.156797   16242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 17:42:24.164925   16242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 17:42:24.164980   16242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 17:42:24.173486   16242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 17:42:24.312180   16242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 17:42:34.128892   16242 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 17:42:34.128970   16242 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 17:42:34.129061   16242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 17:42:34.129166   16242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 17:42:34.129262   16242 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 17:42:34.129379   16242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 17:42:34.131155   16242 out.go:235]   - Generating certificates and keys ...
	I1105 17:42:34.131249   16242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 17:42:34.131316   16242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 17:42:34.131418   16242 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 17:42:34.131497   16242 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 17:42:34.131582   16242 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 17:42:34.131676   16242 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 17:42:34.131775   16242 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 17:42:34.131945   16242 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-320753 localhost] and IPs [192.168.39.201 127.0.0.1 ::1]
	I1105 17:42:34.132025   16242 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 17:42:34.132172   16242 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-320753 localhost] and IPs [192.168.39.201 127.0.0.1 ::1]
	I1105 17:42:34.132268   16242 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 17:42:34.132365   16242 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 17:42:34.132430   16242 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 17:42:34.132506   16242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 17:42:34.132584   16242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 17:42:34.132683   16242 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 17:42:34.132766   16242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 17:42:34.132863   16242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 17:42:34.132931   16242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 17:42:34.133033   16242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 17:42:34.133127   16242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 17:42:34.134542   16242 out.go:235]   - Booting up control plane ...
	I1105 17:42:34.134648   16242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 17:42:34.134744   16242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 17:42:34.134847   16242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 17:42:34.135028   16242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 17:42:34.135146   16242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 17:42:34.135213   16242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 17:42:34.135348   16242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 17:42:34.135467   16242 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 17:42:34.135533   16242 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001156539s
	I1105 17:42:34.135624   16242 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 17:42:34.135704   16242 kubeadm.go:310] [api-check] The API server is healthy after 4.502039687s
	I1105 17:42:34.135796   16242 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 17:42:34.135923   16242 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 17:42:34.135976   16242 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 17:42:34.136122   16242 kubeadm.go:310] [mark-control-plane] Marking the node addons-320753 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 17:42:34.136175   16242 kubeadm.go:310] [bootstrap-token] Using token: s3vdam.ma6k0x78nxs5a20n
	I1105 17:42:34.137518   16242 out.go:235]   - Configuring RBAC rules ...
	I1105 17:42:34.137616   16242 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 17:42:34.137696   16242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 17:42:34.137849   16242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 17:42:34.138014   16242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 17:42:34.138116   16242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 17:42:34.138184   16242 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 17:42:34.138289   16242 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 17:42:34.138330   16242 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 17:42:34.138366   16242 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 17:42:34.138376   16242 kubeadm.go:310] 
	I1105 17:42:34.138433   16242 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 17:42:34.138438   16242 kubeadm.go:310] 
	I1105 17:42:34.138504   16242 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 17:42:34.138510   16242 kubeadm.go:310] 
	I1105 17:42:34.138532   16242 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 17:42:34.138630   16242 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 17:42:34.138714   16242 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 17:42:34.138725   16242 kubeadm.go:310] 
	I1105 17:42:34.138774   16242 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 17:42:34.138780   16242 kubeadm.go:310] 
	I1105 17:42:34.138837   16242 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 17:42:34.138846   16242 kubeadm.go:310] 
	I1105 17:42:34.138899   16242 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 17:42:34.138985   16242 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 17:42:34.139094   16242 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 17:42:34.139106   16242 kubeadm.go:310] 
	I1105 17:42:34.139229   16242 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 17:42:34.139343   16242 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 17:42:34.139354   16242 kubeadm.go:310] 
	I1105 17:42:34.139451   16242 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s3vdam.ma6k0x78nxs5a20n \
	I1105 17:42:34.139534   16242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 17:42:34.139552   16242 kubeadm.go:310] 	--control-plane 
	I1105 17:42:34.139558   16242 kubeadm.go:310] 
	I1105 17:42:34.139622   16242 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 17:42:34.139633   16242 kubeadm.go:310] 
	I1105 17:42:34.139716   16242 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s3vdam.ma6k0x78nxs5a20n \
	I1105 17:42:34.139836   16242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
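(Note: the --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. The following is a minimal Go sketch, not part of the test output, that recomputes such a digest; the /etc/kubernetes/pki/ca.crt path is the kubeadm default and is assumed here.)

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed kubeadm default location of the cluster CA certificate.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}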
	I1105 17:42:34.139850   16242 cni.go:84] Creating CNI manager for ""
	I1105 17:42:34.139859   16242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 17:42:34.141306   16242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 17:42:34.142311   16242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 17:42:34.154653   16242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 17:42:34.176526   16242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 17:42:34.176597   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:34.176638   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-320753 minikube.k8s.io/updated_at=2024_11_05T17_42_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=addons-320753 minikube.k8s.io/primary=true
	I1105 17:42:34.321683   16242 ops.go:34] apiserver oom_adj: -16
	I1105 17:42:34.331120   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:34.831983   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:35.331325   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:35.832229   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:36.331211   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:36.832163   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:37.332233   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:37.831944   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:38.332194   16242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:42:38.423654   16242 kubeadm.go:1113] duration metric: took 4.2471119s to wait for elevateKubeSystemPrivileges
	I1105 17:42:38.423685   16242 kubeadm.go:394] duration metric: took 14.387085511s to StartCluster
	I1105 17:42:38.423701   16242 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:38.423831   16242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 17:42:38.424237   16242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:42:38.424439   16242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 17:42:38.424455   16242 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:42:38.424512   16242 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1105 17:42:38.424616   16242 addons.go:69] Setting yakd=true in profile "addons-320753"
	I1105 17:42:38.424631   16242 addons.go:69] Setting inspektor-gadget=true in profile "addons-320753"
	I1105 17:42:38.424642   16242 addons.go:69] Setting metrics-server=true in profile "addons-320753"
	I1105 17:42:38.424653   16242 addons.go:234] Setting addon metrics-server=true in "addons-320753"
	I1105 17:42:38.424647   16242 addons.go:69] Setting default-storageclass=true in profile "addons-320753"
	I1105 17:42:38.424658   16242 addons.go:234] Setting addon inspektor-gadget=true in "addons-320753"
	I1105 17:42:38.424669   16242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-320753"
	I1105 17:42:38.424674   16242 config.go:182] Loaded profile config "addons-320753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:42:38.424681   16242 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-320753"
	I1105 17:42:38.424701   16242 addons.go:69] Setting registry=true in profile "addons-320753"
	I1105 17:42:38.424709   16242 addons.go:69] Setting gcp-auth=true in profile "addons-320753"
	I1105 17:42:38.424714   16242 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-320753"
	I1105 17:42:38.424717   16242 addons.go:234] Setting addon registry=true in "addons-320753"
	I1105 17:42:38.424721   16242 addons.go:69] Setting ingress=true in profile "addons-320753"
	I1105 17:42:38.424732   16242 addons.go:69] Setting ingress-dns=true in profile "addons-320753"
	I1105 17:42:38.424743   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424682   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424750   16242 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-320753"
	I1105 17:42:38.424751   16242 addons.go:69] Setting cloud-spanner=true in profile "addons-320753"
	I1105 17:42:38.424766   16242 addons.go:234] Setting addon cloud-spanner=true in "addons-320753"
	I1105 17:42:38.424778   16242 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-320753"
	I1105 17:42:38.424798   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424800   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424691   16242 addons.go:69] Setting volcano=true in profile "addons-320753"
	I1105 17:42:38.425151   16242 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-320753"
	I1105 17:42:38.425164   16242 addons.go:234] Setting addon volcano=true in "addons-320753"
	I1105 17:42:38.425166   16242 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-320753"
	I1105 17:42:38.425169   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425178   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425187   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424696   16242 addons.go:69] Setting volumesnapshots=true in profile "addons-320753"
	I1105 17:42:38.425194   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425203   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.425220   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.424744   16242 addons.go:234] Setting addon ingress=true in "addons-320753"
	I1105 17:42:38.425328   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424703   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.425500   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425524   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.425705   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425735   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.425835   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.424662   16242 addons.go:69] Setting storage-provisioner=true in profile "addons-320753"
	I1105 17:42:38.425860   16242 addons.go:234] Setting addon storage-provisioner=true in "addons-320753"
	I1105 17:42:38.425880   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.425200   16242 addons.go:234] Setting addon volumesnapshots=true in "addons-320753"
	I1105 17:42:38.426020   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.425891   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.426194   16242 out.go:177] * Verifying Kubernetes components...
	I1105 17:42:38.425179   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.424637   16242 addons.go:234] Setting addon yakd=true in "addons-320753"
	I1105 17:42:38.426324   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.426350   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.424727   16242 mustload.go:65] Loading cluster: addons-320753
	I1105 17:42:38.426387   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.426408   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.424744   16242 addons.go:234] Setting addon ingress-dns=true in "addons-320753"
	I1105 17:42:38.424687   16242 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-320753"
	I1105 17:42:38.424747   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.425143   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.425185   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.425201   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.426508   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.426572   16242 config.go:182] Loaded profile config "addons-320753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:42:38.426597   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.426640   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.426849   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.426904   16242 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-320753"
	I1105 17:42:38.427020   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.427043   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.427116   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.427149   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.427289   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.427328   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.440872   16242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:42:38.446675   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I1105 17:42:38.448137   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I1105 17:42:38.449145   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I1105 17:42:38.451314   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.451346   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.451686   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.451708   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.451717   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.451739   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.452559   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I1105 17:42:38.452688   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.453172   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.453192   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.453259   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.453333   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.453813   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.453836   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.453956   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.453980   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.454034   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.454083   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.454493   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.454508   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.454552   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.454596   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.455020   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.455051   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.455197   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.455533   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.455559   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.463059   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I1105 17:42:38.471431   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.472559   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.472646   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.473019   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.473271   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1105 17:42:38.473625   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.473707   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.473778   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.474298   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.474319   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.474574   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.474666   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.475396   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I1105 17:42:38.475875   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.476412   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.476428   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.476766   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.476943   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.477500   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33659
	I1105 17:42:38.478921   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.479880   16242 addons.go:234] Setting addon default-storageclass=true in "addons-320753"
	I1105 17:42:38.479924   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.480269   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.480333   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.480606   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I1105 17:42:38.480934   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1105 17:42:38.481868   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33433
	I1105 17:42:38.482221   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1105 17:42:38.482240   16242 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1105 17:42:38.482258   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.483070   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.483528   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.483620   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.483639   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.484388   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.484990   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.485026   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.485130   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.485158   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.485546   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.486073   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.486109   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.486302   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.486377   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.486399   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.486658   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.486822   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.486928   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.487075   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.488308   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I1105 17:42:38.488828   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.489301   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.489329   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.489742   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.490297   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.490338   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.492799   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I1105 17:42:38.493245   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.493727   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.493746   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.494115   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.494270   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.495932   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.496296   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.496324   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.503428   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.503490   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.503696   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.503754   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.504366   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.505177   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.505206   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.505828   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.506448   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.506484   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.510920   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I1105 17:42:38.511547   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.512029   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.512058   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.512454   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.512637   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.513680   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46565
	I1105 17:42:38.514107   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.514449   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.514652   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.514675   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.514692   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I1105 17:42:38.515129   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.515129   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.515595   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.515620   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.515690   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.515738   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.515992   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.516634   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.516687   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.517160   16242 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1105 17:42:38.517161   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I1105 17:42:38.518234   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.518412   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1105 17:42:38.518430   16242 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1105 17:42:38.518451   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.518939   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.518958   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.519384   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.519953   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.519987   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.521383   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I1105 17:42:38.521932   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.522181   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.522341   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.522368   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.522618   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.522763   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.522862   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.522957   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.523498   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.523515   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.523865   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.524340   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.524380   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.526527   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I1105 17:42:38.526830   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.528004   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.528023   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.528330   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.528681   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.530164   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.532173   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45911
	I1105 17:42:38.532562   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.532900   16242 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1105 17:42:38.533405   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.533425   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.533826   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.534015   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.534180   16242 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1105 17:42:38.534196   16242 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1105 17:42:38.534215   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.538309   16242 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-320753"
	I1105 17:42:38.538355   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:38.538746   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.538783   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.539017   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.539077   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.539099   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.539116   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.539306   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.539475   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.539613   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.545834   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I1105 17:42:38.546277   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.546808   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.546836   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.547199   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.547379   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.547439   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I1105 17:42:38.547930   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.549002   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.549018   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.549254   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.549737   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1105 17:42:38.550155   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.550410   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.551317   16242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1105 17:42:38.552290   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.552571   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:38.552592   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:38.552678   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.552897   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:38.552921   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:38.552935   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:38.552952   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:38.552964   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:38.553267   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:38.553279   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	W1105 17:42:38.553363   16242 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1105 17:42:38.553605   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.553629   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.553958   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.554832   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.554861   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.556003   16242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:42:38.557402   16242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:42:38.559246   16242 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:42:38.559273   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1105 17:42:38.559296   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.561156   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I1105 17:42:38.561939   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.563146   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.563375   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.563395   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.563855   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.563883   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.564036   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.564130   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.564178   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.564227   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.564564   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.564707   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.566291   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.567324   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I1105 17:42:38.567879   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.568546   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.568574   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.568756   16242 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1105 17:42:38.568979   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.569208   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I1105 17:42:38.569585   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:38.569609   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.569619   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:38.570042   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.570062   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.570412   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.570543   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.570790   16242 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:42:38.570807   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1105 17:42:38.570824   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.571935   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46725
	I1105 17:42:38.572403   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.572696   16242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 17:42:38.572712   16242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 17:42:38.572729   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.573675   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I1105 17:42:38.573966   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I1105 17:42:38.574117   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.574202   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.574569   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.574586   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.574624   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.574697   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.574712   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.575001   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.575272   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.575290   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.575346   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.575389   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.575568   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.575586   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.575588   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.575727   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.575946   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.576180   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.576266   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.576432   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.576612   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.577050   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.577357   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.577633   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.577649   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.577788   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.578375   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.578560   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.578612   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.579006   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.579735   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.579926   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.580155   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I1105 17:42:38.580492   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.580810   16242 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1105 17:42:38.580911   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.581316   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.580936   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I1105 17:42:38.581674   16242 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1105 17:42:38.581719   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.581720   16242 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1105 17:42:38.582009   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.582563   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.582579   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.582631   16242 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:42:38.582644   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1105 17:42:38.582663   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.582806   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.583130   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.583328   16242 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:42:38.583343   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1105 17:42:38.583366   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.583460   16242 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 17:42:38.583469   16242 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 17:42:38.583482   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.583628   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I1105 17:42:38.583932   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.584273   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.584405   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.584421   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.584777   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.584973   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.585408   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I1105 17:42:38.585785   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.586070   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.586202   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.586215   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.586554   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.586780   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.587070   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.587893   16242 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1105 17:42:38.587985   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.588900   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.588919   16242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 17:42:38.589266   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.589579   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.589584   16242 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1105 17:42:38.589910   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1105 17:42:38.589927   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.589608   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.589972   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.589973   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.589777   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.589992   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.590109   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.590156   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.590255   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.590282   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.590333   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1105 17:42:38.590407   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.590428   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.590727   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.591835   16242 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1105 17:42:38.591896   16242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:42:38.592327   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 17:42:38.592349   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.592976   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.592995   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.593171   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.593328   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.593347   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.593874   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.593994   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.594283   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.594356   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.594418   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.594612   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1105 17:42:38.594646   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.595032   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.595334   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.595425   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.595455   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.595522   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.595608   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.595632   16242 out.go:177]   - Using image docker.io/registry:2.8.3
	I1105 17:42:38.595661   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.596150   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I1105 17:42:38.595792   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.596382   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.596496   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.596515   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.596972   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.596992   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.597345   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.597361   16242 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1105 17:42:38.597371   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1105 17:42:38.597384   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.597529   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.598666   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1105 17:42:38.600134   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1105 17:42:38.600178   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.600489   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.600507   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.600657   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.600783   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.600883   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.600971   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	W1105 17:42:38.601647   16242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35742->192.168.39.201:22: read: connection reset by peer
	I1105 17:42:38.601672   16242 retry.go:31] will retry after 274.928015ms: ssh: handshake failed: read tcp 192.168.39.1:35742->192.168.39.201:22: read: connection reset by peer
	I1105 17:42:38.602720   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1105 17:42:38.603924   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1105 17:42:38.605233   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1105 17:42:38.606445   16242 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1105 17:42:38.607585   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1105 17:42:38.607602   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1105 17:42:38.607622   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.610042   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36983
	I1105 17:42:38.610462   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:38.610485   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.610879   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.610904   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.611063   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:38.611081   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.611084   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:38.611255   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.611375   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.611422   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:38.611493   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:38.611798   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:38.613509   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:38.615147   16242 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1105 17:42:38.616371   16242 out.go:177]   - Using image docker.io/busybox:stable
	I1105 17:42:38.617748   16242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:42:38.617766   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1105 17:42:38.617781   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:38.620670   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.621034   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:38.621063   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:38.621179   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:38.621344   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:38.621461   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:38.621586   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	W1105 17:42:38.623064   16242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35752->192.168.39.201:22: read: connection reset by peer
	I1105 17:42:38.623087   16242 retry.go:31] will retry after 313.150416ms: ssh: handshake failed: read tcp 192.168.39.1:35752->192.168.39.201:22: read: connection reset by peer
	I1105 17:42:38.831755   16242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:42:38.831821   16242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 17:42:38.939670   16242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1105 17:42:38.939705   16242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1105 17:42:38.993717   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:42:38.995843   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:42:39.000929   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1105 17:42:39.000959   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1105 17:42:39.016483   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1105 17:42:39.036046   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:42:39.036386   16242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 17:42:39.036402   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1105 17:42:39.064834   16242 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:42:39.064856   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1105 17:42:39.085338   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:42:39.098827   16242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1105 17:42:39.098863   16242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1105 17:42:39.110123   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 17:42:39.123513   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1105 17:42:39.123533   16242 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1105 17:42:39.129464   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:42:39.163533   16242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 17:42:39.163562   16242 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 17:42:39.235428   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1105 17:42:39.235456   16242 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1105 17:42:39.251182   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:42:39.275268   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1105 17:42:39.275298   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1105 17:42:39.304573   16242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1105 17:42:39.304604   16242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1105 17:42:39.411870   16242 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1105 17:42:39.411895   16242 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1105 17:42:39.414585   16242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:42:39.414608   16242 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 17:42:39.430134   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1105 17:42:39.430157   16242 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1105 17:42:39.457143   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:42:39.465651   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1105 17:42:39.465682   16242 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1105 17:42:39.505770   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1105 17:42:39.505802   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1105 17:42:39.614995   16242 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:42:39.615022   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1105 17:42:39.625835   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:42:39.634157   16242 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:42:39.634179   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1105 17:42:39.682706   16242 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:42:39.682729   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1105 17:42:39.820675   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1105 17:42:39.820708   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1105 17:42:39.850700   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:42:39.860512   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:42:39.864522   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:42:40.012426   16242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1105 17:42:40.012452   16242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1105 17:42:40.343010   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1105 17:42:40.343036   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1105 17:42:40.824129   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1105 17:42:40.824172   16242 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1105 17:42:41.052795   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1105 17:42:41.052824   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1105 17:42:41.264054   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1105 17:42:41.264100   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1105 17:42:41.268274   16242 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.436480669s)
	I1105 17:42:41.268303   16242 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.436444867s)
	I1105 17:42:41.268330   16242 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1105 17:42:41.269060   16242 node_ready.go:35] waiting up to 6m0s for node "addons-320753" to be "Ready" ...
	I1105 17:42:41.271853   16242 node_ready.go:49] node "addons-320753" has status "Ready":"True"
	I1105 17:42:41.271873   16242 node_ready.go:38] duration metric: took 2.794443ms for node "addons-320753" to be "Ready" ...
	I1105 17:42:41.271880   16242 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:42:41.286272   16242 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:41.633582   16242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:42:41.633610   16242 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1105 17:42:41.793607   16242 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-320753" context rescaled to 1 replicas
	I1105 17:42:41.900396   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:42:42.710143   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.716388624s)
	I1105 17:42:42.710149   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.714279541s)
	I1105 17:42:42.710185   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710196   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710211   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710198   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710207   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.693691101s)
	I1105 17:42:42.710259   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.674190516s)
	I1105 17:42:42.710307   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710318   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710274   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710369   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710649   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:42.710697   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.710711   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.710717   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.710730   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.710743   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710753   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710770   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.710720   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710805   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.710878   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:42.710904   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.710928   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.710938   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.710946   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.711116   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.711140   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:42.711149   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:42.711345   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:42.711379   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.711385   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.711442   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.711456   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.711506   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.711521   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:42.711664   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:42.711698   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:42.711706   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:43.299860   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:45.673110   16242 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1105 17:42:45.673148   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:45.676198   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:45.676614   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:45.676641   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:45.676787   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:45.677010   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:45.677159   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:45.677301   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:45.816774   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:46.132290   16242 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1105 17:42:46.213215   16242 addons.go:234] Setting addon gcp-auth=true in "addons-320753"
	I1105 17:42:46.213266   16242 host.go:66] Checking if "addons-320753" exists ...
	I1105 17:42:46.213551   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:46.213596   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:46.230632   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I1105 17:42:46.231157   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:46.231687   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:46.231712   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:46.232060   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:46.232577   16242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 17:42:46.232635   16242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 17:42:46.247412   16242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33273
	I1105 17:42:46.247868   16242 main.go:141] libmachine: () Calling .GetVersion
	I1105 17:42:46.248337   16242 main.go:141] libmachine: Using API Version  1
	I1105 17:42:46.248361   16242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 17:42:46.248680   16242 main.go:141] libmachine: () Calling .GetMachineName
	I1105 17:42:46.248883   16242 main.go:141] libmachine: (addons-320753) Calling .GetState
	I1105 17:42:46.250615   16242 main.go:141] libmachine: (addons-320753) Calling .DriverName
	I1105 17:42:46.250853   16242 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1105 17:42:46.250881   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHHostname
	I1105 17:42:46.254057   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:46.254468   16242 main.go:141] libmachine: (addons-320753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:64:28", ip: ""} in network mk-addons-320753: {Iface:virbr1 ExpiryTime:2024-11-05 18:42:09 +0000 UTC Type:0 Mac:52:54:00:89:64:28 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:addons-320753 Clientid:01:52:54:00:89:64:28}
	I1105 17:42:46.254494   16242 main.go:141] libmachine: (addons-320753) DBG | domain addons-320753 has defined IP address 192.168.39.201 and MAC address 52:54:00:89:64:28 in network mk-addons-320753
	I1105 17:42:46.254627   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHPort
	I1105 17:42:46.254798   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHKeyPath
	I1105 17:42:46.254925   16242 main.go:141] libmachine: (addons-320753) Calling .GetSSHUsername
	I1105 17:42:46.255121   16242 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/addons-320753/id_rsa Username:docker}
	I1105 17:42:46.823271   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.737895626s)
	I1105 17:42:46.823326   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823325   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.713168748s)
	I1105 17:42:46.823340   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823357   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.693864304s)
	I1105 17:42:46.823363   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823414   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823420   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823428   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823456   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.572230892s)
	I1105 17:42:46.823486   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823495   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823529   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.366354008s)
	I1105 17:42:46.823548   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823558   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823615   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.197749552s)
	I1105 17:42:46.823621   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.823630   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.823634   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823639   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823642   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823647   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823715   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.972975222s)
	I1105 17:42:46.823743   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.823755   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.823755   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.823763   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823773   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823779   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.823789   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.823802   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.823810   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823812   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823817   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823820   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.823817   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.963277221s)
	W1105 17:42:46.823848   16242 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1105 17:42:46.823870   16242 retry.go:31] will retry after 138.066697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1105 17:42:46.823910   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.959363058s)
	I1105 17:42:46.823928   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.823938   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.824025   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.824035   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.824049   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.824052   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.824057   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.824067   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.824074   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.824082   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.824089   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.824108   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.824114   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.824124   16242 addons.go:475] Verifying addon ingress=true in "addons-320753"
	I1105 17:42:46.825276   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825289   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.825298   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.825305   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.825426   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.825448   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825452   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.825457   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.825463   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.825506   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.825525   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825531   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.825714   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.825741   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825748   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.825754   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.825756   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.825766   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.825780   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.825787   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827521   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.827537   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827547   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827550   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827555   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.827557   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827562   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.827566   16242 addons.go:475] Verifying addon registry=true in "addons-320753"
	I1105 17:42:46.824075   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.827714   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.827751   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827758   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827861   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.827892   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827898   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.827906   16242 addons.go:475] Verifying addon metrics-server=true in "addons-320753"
	I1105 17:42:46.827964   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.827974   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.829096   16242 out.go:177] * Verifying ingress addon...
	I1105 17:42:46.829099   16242 out.go:177] * Verifying registry addon...
	I1105 17:42:46.829955   16242 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-320753 service yakd-dashboard -n yakd-dashboard
	
	I1105 17:42:46.831463   16242 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1105 17:42:46.831548   16242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1105 17:42:46.874728   16242 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1105 17:42:46.874748   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:46.874960   16242 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1105 17:42:46.874997   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
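	The kapi.go waits above poll the same label selectors that can be checked by hand with kubectl; a minimal sketch against this cluster (the --timeout value is an assumption, not taken from the log):

		kubectl --context addons-320753 -n ingress-nginx wait pod --selector=app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=90s
		kubectl --context addons-320753 -n kube-system wait pod --selector=kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=90s
	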
	I1105 17:42:46.917868   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:46.917894   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:46.918207   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:46.918229   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:46.918206   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:46.962902   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:42:47.029646   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:47.029673   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:47.029925   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:47.029942   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:47.339789   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:47.342082   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:47.876517   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:47.877984   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:48.224724   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.324272206s)
	I1105 17:42:48.224769   16242 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.973891875s)
	I1105 17:42:48.224783   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:48.224800   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:48.225069   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:48.225116   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:48.225128   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:48.225146   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:48.225158   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:48.225483   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:48.225523   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:48.225534   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:48.225544   16242 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-320753"
	I1105 17:42:48.226471   16242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:42:48.227477   16242 out.go:177] * Verifying csi-hostpath-driver addon...
	I1105 17:42:48.229013   16242 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1105 17:42:48.229945   16242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1105 17:42:48.230306   16242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1105 17:42:48.230322   16242 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1105 17:42:48.257256   16242 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:42:48.257287   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:48.432386   16242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1105 17:42:48.432436   16242 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1105 17:42:48.571158   16242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:42:48.571178   16242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1105 17:42:48.643057   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:48.643560   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:48.643593   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:48.645257   16242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:42:48.736882   16242 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:42:48.736905   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:48.836753   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:48.837025   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:49.235227   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:49.336834   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:49.337040   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:49.593589   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.630637429s)
	I1105 17:42:49.593679   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:49.593701   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:49.593954   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:49.594003   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:49.594012   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:49.594027   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:49.594038   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:49.594264   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:49.594354   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:49.594333   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
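	The completed apply above installs the snapshot.storage.k8s.io CRDs together with the volume-snapshot-controller; whether the CRDs registered can be confirmed by hand (illustrative, not part of the test flow):

		kubectl --context addons-320753 get crd volumesnapshotclasses.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io volumesnapshots.snapshot.storage.k8s.io
	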
	I1105 17:42:49.734240   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:49.835643   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:49.837798   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:50.266957   16242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.621667243s)
	I1105 17:42:50.267016   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:50.267028   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:50.267336   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:50.267423   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:50.267436   16242 main.go:141] libmachine: Making call to close driver server
	I1105 17:42:50.267441   16242 main.go:141] libmachine: (addons-320753) Calling .Close
	I1105 17:42:50.267385   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:50.267683   16242 main.go:141] libmachine: Successfully made call to close driver server
	I1105 17:42:50.267711   16242 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 17:42:50.267731   16242 main.go:141] libmachine: (addons-320753) DBG | Closing plugin on server side
	I1105 17:42:50.269277   16242 addons.go:475] Verifying addon gcp-auth=true in "addons-320753"
	I1105 17:42:50.270615   16242 out.go:177] * Verifying gcp-auth addon...
	I1105 17:42:50.272317   16242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1105 17:42:50.276491   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:50.331287   16242 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1105 17:42:50.331312   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:50.377841   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:50.379460   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:50.735645   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:50.776530   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:50.794327   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:50.836976   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:50.837648   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:51.234194   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:51.275258   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:51.335749   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:51.336014   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:51.735421   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:51.775611   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:51.836086   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:51.836139   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:52.235326   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:52.275333   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:52.336692   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:52.336903   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:52.736046   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:52.776676   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:52.836413   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:52.836566   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:53.234811   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:53.276119   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:53.292238   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:53.335683   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:53.335951   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:53.769023   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:53.775288   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:53.836112   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:53.836730   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:54.235475   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:54.275345   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:54.336377   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:54.336537   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:54.735997   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:54.776269   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:54.836185   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:54.837552   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:55.234424   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:55.275538   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:55.292306   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:55.336423   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:55.337175   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:55.734667   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:55.776047   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:55.835750   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:55.835804   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:56.235572   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:56.275623   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:56.335640   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:56.335872   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:56.735321   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:56.775389   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:56.835252   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:56.836011   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:57.234933   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:57.275869   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:57.335178   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:57.335956   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:57.734759   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:57.776045   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:57.792995   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:57.835707   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:57.836176   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:58.234717   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:58.276284   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:58.336492   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:58.336808   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:58.734536   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:58.775801   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:58.835025   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:58.836260   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:59.235499   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:59.276959   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:59.336252   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:59.338089   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:59.734617   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:59.775238   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:59.836920   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:59.837454   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:00.234893   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:00.275485   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:00.292287   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:00.336371   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:00.336676   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:00.733707   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:00.776336   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:00.835145   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:00.835758   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:01.235488   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:01.275432   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:01.337132   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:01.337508   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:01.907767   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:01.907892   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:01.908389   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:01.908571   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:02.234946   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:02.276157   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:02.292555   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:02.336126   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:02.336536   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:02.735728   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:02.787821   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:02.842833   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:02.843031   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:03.234905   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:03.276407   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:03.335608   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:03.336375   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:03.735388   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:03.775578   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:03.835768   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:03.836327   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:04.424542   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:04.424689   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:04.425038   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:04.426802   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:04.427187   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:04.734396   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:04.776538   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:04.835880   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:04.836343   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:05.233949   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:05.276140   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:05.335791   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:05.336588   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:05.734061   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:05.776329   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:05.836032   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:05.836409   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:06.742384   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:06.742635   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:06.742810   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:06.743372   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:06.745170   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:06.746926   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:06.775729   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:06.836589   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:06.837260   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:07.234842   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:07.275788   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:07.335492   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:07.336574   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:07.734537   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:07.786311   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:07.839865   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:07.841620   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:08.235593   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:08.276478   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:08.335745   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:08.335953   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:08.734462   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:08.775753   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:08.791674   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:08.835533   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:08.836021   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:09.236428   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:09.275984   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:09.336100   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:09.336392   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:09.734604   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:09.775645   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:09.835851   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:09.836253   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:10.235035   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:10.276329   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:10.334739   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:10.335401   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:10.734878   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:10.776739   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:10.791881   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:10.835352   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:10.835541   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:11.234803   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:11.275748   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:11.336792   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:11.337187   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:11.735502   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:11.776022   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:11.837307   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:11.837775   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:12.235757   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:12.276143   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:12.335085   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:12.335459   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:12.735550   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:12.775748   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:12.792324   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:12.836177   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:12.836388   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:13.234745   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:13.275326   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:13.335383   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:13.336454   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:13.734339   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:13.776408   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:13.835862   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:13.836316   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:14.233665   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:14.276099   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:14.335771   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:14.336042   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:14.735327   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:14.775256   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:14.793855   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:14.835810   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:14.835879   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:15.235569   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:15.275760   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:15.336516   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:15.336557   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:15.734818   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:15.775762   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:15.835929   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:15.836626   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:16.234842   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:16.276090   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:16.336038   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:16.336403   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:16.733561   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:16.775721   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:16.837265   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:16.837895   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:17.234989   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:17.275933   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:17.292950   16242 pod_ready.go:103] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:17.334893   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:17.336385   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:17.734468   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:17.775885   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:17.835670   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:17.836007   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:18.237865   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:18.276172   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:18.292198   16242 pod_ready.go:93] pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.292225   16242 pod_ready.go:82] duration metric: took 37.005905525s for pod "amd-gpu-device-plugin-h5b9p" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.292242   16242 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-67h67" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.293922   16242 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-67h67" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-67h67" not found
	I1105 17:43:18.293948   16242 pod_ready.go:82] duration metric: took 1.697844ms for pod "coredns-7c65d6cfc9-67h67" in "kube-system" namespace to be "Ready" ...
	E1105 17:43:18.293960   16242 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-67h67" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-67h67" not found
	I1105 17:43:18.293970   16242 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cttxl" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.298198   16242 pod_ready.go:93] pod "coredns-7c65d6cfc9-cttxl" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.298222   16242 pod_ready.go:82] duration metric: took 4.243824ms for pod "coredns-7c65d6cfc9-cttxl" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.298234   16242 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.302198   16242 pod_ready.go:93] pod "etcd-addons-320753" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.302219   16242 pod_ready.go:82] duration metric: took 3.976888ms for pod "etcd-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.302226   16242 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.307705   16242 pod_ready.go:93] pod "kube-apiserver-addons-320753" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.307724   16242 pod_ready.go:82] duration metric: took 5.49182ms for pod "kube-apiserver-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.307732   16242 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.335237   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:18.335455   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:18.489772   16242 pod_ready.go:93] pod "kube-controller-manager-addons-320753" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.489794   16242 pod_ready.go:82] duration metric: took 182.055769ms for pod "kube-controller-manager-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.489805   16242 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-24n9l" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.734369   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:18.775914   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:18.836651   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:18.836707   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:18.890045   16242 pod_ready.go:93] pod "kube-proxy-24n9l" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:18.890069   16242 pod_ready.go:82] duration metric: took 400.25624ms for pod "kube-proxy-24n9l" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:18.890082   16242 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:19.235572   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:19.290301   16242 pod_ready.go:93] pod "kube-scheduler-addons-320753" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:19.290329   16242 pod_ready.go:82] duration metric: took 400.238241ms for pod "kube-scheduler-addons-320753" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:19.290343   16242 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rgxmq" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:19.335627   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:19.336070   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:19.336184   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:19.690855   16242 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rgxmq" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:19.690879   16242 pod_ready.go:82] duration metric: took 400.528046ms for pod "nvidia-device-plugin-daemonset-rgxmq" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:19.690887   16242 pod_ready.go:39] duration metric: took 38.418998496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:43:19.690904   16242 api_server.go:52] waiting for apiserver process to appear ...
	I1105 17:43:19.690992   16242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 17:43:19.710532   16242 api_server.go:72] duration metric: took 41.286043118s to wait for apiserver process to appear ...
	I1105 17:43:19.710557   16242 api_server.go:88] waiting for apiserver healthz status ...
	I1105 17:43:19.710575   16242 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I1105 17:43:19.714752   16242 api_server.go:279] https://192.168.39.201:8443/healthz returned 200:
	ok
	I1105 17:43:19.715745   16242 api_server.go:141] control plane version: v1.31.2
	I1105 17:43:19.715766   16242 api_server.go:131] duration metric: took 5.203361ms to wait for apiserver health ...
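	The healthz probe above can be reproduced by hand against the same endpoint; a minimal sketch, assuming anonymous access to the health endpoints is enabled (the upstream default), so no bearer token is needed and -k skips the self-signed certificate check:

		curl -k https://192.168.39.201:8443/healthz
		ok
	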
	I1105 17:43:19.715774   16242 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 17:43:19.734449   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:19.776054   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:19.835917   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:19.836229   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:19.896414   16242 system_pods.go:59] 18 kube-system pods found
	I1105 17:43:19.896448   16242 system_pods.go:61] "amd-gpu-device-plugin-h5b9p" [012ac43a-bb0b-4a85-91d7-47b7b36eb7c3] Running
	I1105 17:43:19.896457   16242 system_pods.go:61] "coredns-7c65d6cfc9-cttxl" [2478e920-f380-4190-bc39-00c34d84a86f] Running
	I1105 17:43:19.896466   16242 system_pods.go:61] "csi-hostpath-attacher-0" [07c0442e-f739-45c1-bce1-70dba665cbba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1105 17:43:19.896474   16242 system_pods.go:61] "csi-hostpath-resizer-0" [53cca88c-38b8-486f-ac5b-b155d7a0fcbd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1105 17:43:19.896484   16242 system_pods.go:61] "csi-hostpathplugin-ssdqg" [55586e10-8074-4b16-8197-d3b8dfeb30fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1105 17:43:19.896491   16242 system_pods.go:61] "etcd-addons-320753" [f97557d4-2f51-4ec7-bd14-c47c64cee30b] Running
	I1105 17:43:19.896497   16242 system_pods.go:61] "kube-apiserver-addons-320753" [a127d10c-37ed-4d05-a8f7-f8e855bcf716] Running
	I1105 17:43:19.896506   16242 system_pods.go:61] "kube-controller-manager-addons-320753" [0ddb9a92-e16b-45ea-9eb2-2033d2795283] Running
	I1105 17:43:19.896516   16242 system_pods.go:61] "kube-ingress-dns-minikube" [1eba0773-5303-4096-98b4-0e8258855ad4] Running
	I1105 17:43:19.896522   16242 system_pods.go:61] "kube-proxy-24n9l" [64cb0df5-d57b-4782-bae7-4ac5639dc01e] Running
	I1105 17:43:19.896527   16242 system_pods.go:61] "kube-scheduler-addons-320753" [3de149a1-916c-48c9-8f62-f76e0c1682e5] Running
	I1105 17:43:19.896536   16242 system_pods.go:61] "metrics-server-84c5f94fbc-khd9b" [5c9668b9-1b38-4b29-a16b-750ee7a74276] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 17:43:19.896542   16242 system_pods.go:61] "nvidia-device-plugin-daemonset-rgxmq" [20281175-a7ec-44e4-a0f9-e0dd96dfe10c] Running
	I1105 17:43:19.896551   16242 system_pods.go:61] "registry-66c9cd494c-xtz7j" [549ed7b1-2983-4fca-8715-25afc280c616] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1105 17:43:19.896562   16242 system_pods.go:61] "registry-proxy-k2wqh" [b9f4e07d-8955-4605-8ecd-360952c67ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1105 17:43:19.896571   16242 system_pods.go:61] "snapshot-controller-56fcc65765-6rhm5" [955e4299-ba79-4530-8ebe-78c35525b9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1105 17:43:19.896578   16242 system_pods.go:61] "snapshot-controller-56fcc65765-kh6t8" [24c4c41d-37d5-45b9-a1db-f0a70d94983b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1105 17:43:19.896584   16242 system_pods.go:61] "storage-provisioner" [1ee0e5cc-73a4-44dc-9637-8dbfd1e52030] Running
	I1105 17:43:19.896592   16242 system_pods.go:74] duration metric: took 180.811688ms to wait for pod list to return data ...
	I1105 17:43:19.896603   16242 default_sa.go:34] waiting for default service account to be created ...
	I1105 17:43:20.090566   16242 default_sa.go:45] found service account: "default"
	I1105 17:43:20.090591   16242 default_sa.go:55] duration metric: took 193.98171ms for default service account to be created ...
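	The default_sa check above is equivalent to querying the service account directly (illustrative):

		kubectl --context addons-320753 -n default get serviceaccount default
	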
	I1105 17:43:20.090603   16242 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 17:43:20.234897   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:20.275567   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:20.298643   16242 system_pods.go:86] 18 kube-system pods found
	I1105 17:43:20.298681   16242 system_pods.go:89] "amd-gpu-device-plugin-h5b9p" [012ac43a-bb0b-4a85-91d7-47b7b36eb7c3] Running
	I1105 17:43:20.298690   16242 system_pods.go:89] "coredns-7c65d6cfc9-cttxl" [2478e920-f380-4190-bc39-00c34d84a86f] Running
	I1105 17:43:20.298700   16242 system_pods.go:89] "csi-hostpath-attacher-0" [07c0442e-f739-45c1-bce1-70dba665cbba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1105 17:43:20.298710   16242 system_pods.go:89] "csi-hostpath-resizer-0" [53cca88c-38b8-486f-ac5b-b155d7a0fcbd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1105 17:43:20.298720   16242 system_pods.go:89] "csi-hostpathplugin-ssdqg" [55586e10-8074-4b16-8197-d3b8dfeb30fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1105 17:43:20.298731   16242 system_pods.go:89] "etcd-addons-320753" [f97557d4-2f51-4ec7-bd14-c47c64cee30b] Running
	I1105 17:43:20.298737   16242 system_pods.go:89] "kube-apiserver-addons-320753" [a127d10c-37ed-4d05-a8f7-f8e855bcf716] Running
	I1105 17:43:20.298746   16242 system_pods.go:89] "kube-controller-manager-addons-320753" [0ddb9a92-e16b-45ea-9eb2-2033d2795283] Running
	I1105 17:43:20.298756   16242 system_pods.go:89] "kube-ingress-dns-minikube" [1eba0773-5303-4096-98b4-0e8258855ad4] Running
	I1105 17:43:20.298761   16242 system_pods.go:89] "kube-proxy-24n9l" [64cb0df5-d57b-4782-bae7-4ac5639dc01e] Running
	I1105 17:43:20.298769   16242 system_pods.go:89] "kube-scheduler-addons-320753" [3de149a1-916c-48c9-8f62-f76e0c1682e5] Running
	I1105 17:43:20.298780   16242 system_pods.go:89] "metrics-server-84c5f94fbc-khd9b" [5c9668b9-1b38-4b29-a16b-750ee7a74276] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 17:43:20.298788   16242 system_pods.go:89] "nvidia-device-plugin-daemonset-rgxmq" [20281175-a7ec-44e4-a0f9-e0dd96dfe10c] Running
	I1105 17:43:20.298796   16242 system_pods.go:89] "registry-66c9cd494c-xtz7j" [549ed7b1-2983-4fca-8715-25afc280c616] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1105 17:43:20.298807   16242 system_pods.go:89] "registry-proxy-k2wqh" [b9f4e07d-8955-4605-8ecd-360952c67ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1105 17:43:20.298819   16242 system_pods.go:89] "snapshot-controller-56fcc65765-6rhm5" [955e4299-ba79-4530-8ebe-78c35525b9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1105 17:43:20.298831   16242 system_pods.go:89] "snapshot-controller-56fcc65765-kh6t8" [24c4c41d-37d5-45b9-a1db-f0a70d94983b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1105 17:43:20.298839   16242 system_pods.go:89] "storage-provisioner" [1ee0e5cc-73a4-44dc-9637-8dbfd1e52030] Running
	I1105 17:43:20.298852   16242 system_pods.go:126] duration metric: took 208.242321ms to wait for k8s-apps to be running ...
	I1105 17:43:20.298869   16242 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 17:43:20.298924   16242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 17:43:20.337714   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:20.338291   16242 system_svc.go:56] duration metric: took 39.420489ms WaitForService to wait for kubelet
	I1105 17:43:20.338316   16242 kubeadm.go:582] duration metric: took 41.913831742s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:43:20.338338   16242 node_conditions.go:102] verifying NodePressure condition ...
	I1105 17:43:20.338867   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:20.490641   16242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 17:43:20.490679   16242 node_conditions.go:123] node cpu capacity is 2
	I1105 17:43:20.490694   16242 node_conditions.go:105] duration metric: took 152.350003ms to run NodePressure ...
	I1105 17:43:20.490710   16242 start.go:241] waiting for startup goroutines ...
	I1105 17:43:20.735417   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:20.776609   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:20.836737   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:20.837483   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:21.516893   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:21.517444   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:21.517601   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:21.517622   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:21.734697   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:21.775267   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:21.836927   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:21.837000   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:22.237498   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:22.276242   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:22.336335   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:22.336712   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:22.735311   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:22.777478   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:22.835918   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:22.836713   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:23.235792   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:23.275882   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:23.335114   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:23.335821   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:23.735214   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:23.776555   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:23.836855   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:23.837155   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:24.234587   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:24.276213   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:24.335551   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:24.335889   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:24.735682   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:24.836925   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:24.837753   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:43:24.838084   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:25.235351   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:25.279001   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:25.335340   16242 kapi.go:107] duration metric: took 38.503790715s to wait for kubernetes.io/minikube-addons=registry ...
	I1105 17:43:25.335381   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:25.734751   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:25.775712   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:25.836377   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:26.235414   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:26.277511   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:26.336378   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:26.735178   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:26.775642   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:26.836374   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:27.277781   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:27.280021   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:27.374319   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:27.736322   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:27.776499   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:27.835641   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:28.235274   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:28.277209   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:28.335634   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:28.735415   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:28.776430   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:28.836392   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:29.235685   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:29.335167   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:29.335880   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:29.733802   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:29.776095   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:29.835570   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:30.234679   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:30.275938   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:30.335948   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:30.735107   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:30.775934   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:30.835899   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:31.235286   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:31.275401   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:31.336311   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:31.733931   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:31.775976   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:31.835019   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:32.617048   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:32.617408   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:32.617491   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:32.734595   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:32.775721   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:32.836126   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:33.234844   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:33.275260   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:33.341650   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:33.733926   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:33.776135   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:33.835365   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:34.234778   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:34.275583   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:34.341124   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:34.735148   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:34.775321   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:34.835081   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:35.235237   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:35.275971   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:35.335403   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:35.734779   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:35.776149   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:35.836383   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:36.235572   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:36.276197   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:36.335465   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:36.734433   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:36.775848   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:36.836575   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:37.238267   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:37.287482   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:37.343317   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:37.736234   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:37.776391   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:37.837067   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:38.235433   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:38.276266   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:38.336785   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:38.734784   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:38.776063   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:38.840576   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:39.235255   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:39.276266   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:39.335711   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:39.734964   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:39.777541   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:39.836094   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:40.235381   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:40.275778   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:40.335081   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:40.734912   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:40.775976   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:40.835314   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:41.234208   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:41.275657   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:41.336259   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:41.736488   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:41.835170   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:41.836362   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:42.233869   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:42.276058   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:42.336029   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:42.735010   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:42.834504   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:42.836741   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:43.238878   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:43.276158   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:43.335593   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:43.735236   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:43.776408   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:43.835658   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:44.235012   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:44.276738   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:44.336173   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:44.735149   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:44.777270   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:44.837506   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:45.235439   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:45.275676   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:45.335810   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:45.734610   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:45.775964   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:45.836061   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:46.234278   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:46.275667   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:46.336084   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:46.734821   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:46.777538   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:46.837517   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:47.234335   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:47.275245   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:47.335841   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:47.888925   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:47.890110   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:47.896165   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:48.236423   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:48.277204   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:48.340814   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:48.742310   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:48.781231   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:48.882171   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:49.236084   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:49.277147   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:49.336915   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:49.736596   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:49.776208   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:49.838031   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:50.237134   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:50.276784   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:50.335605   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:50.734370   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:50.775840   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:50.835453   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:51.241678   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:51.278304   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:51.335932   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:51.734656   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:51.834642   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:51.835988   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:52.235187   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:52.334825   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:52.337329   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:52.735471   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:52.778320   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:52.837125   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:53.235745   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:53.277211   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:53.335819   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:53.735030   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:53.775490   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:53.836269   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:54.599314   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:54.599744   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:54.600519   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:54.734695   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:54.775281   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:54.835210   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:55.238965   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:55.341963   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:55.342220   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:55.734333   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:55.775480   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:55.835484   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:56.234809   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:56.275859   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:56.335335   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:56.734076   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:56.776546   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:57.220875   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:57.320037   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:57.320089   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:57.335828   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:57.735134   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:57.776893   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:57.835745   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:58.234653   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:58.275544   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:58.336238   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:58.735567   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:58.779153   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:58.835910   16242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:43:59.236581   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:59.275989   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:59.335442   16242 kapi.go:107] duration metric: took 1m12.50397447s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1105 17:43:59.734474   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:59.776236   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:00.389254   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:00.488820   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:00.735574   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:00.775774   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:01.234935   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:01.276555   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:01.734870   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:01.776142   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:02.234589   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:02.276661   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:02.734911   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:02.776353   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:03.234482   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:03.275668   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:03.735063   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:03.776470   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:44:04.236117   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:04.334267   16242 kapi.go:107] duration metric: took 1m14.061946319s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1105 17:44:04.336055   16242 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-320753 cluster.
	I1105 17:44:04.337817   16242 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1105 17:44:04.339179   16242 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1105 17:44:04.735117   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:05.235358   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:05.739476   16242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:44:06.234866   16242 kapi.go:107] duration metric: took 1m18.004919144s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1105 17:44:06.236747   16242 out.go:177] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, storage-provisioner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1105 17:44:06.238082   16242 addons.go:510] duration metric: took 1m27.813567554s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns storage-provisioner nvidia-device-plugin inspektor-gadget metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1105 17:44:06.238126   16242 start.go:246] waiting for cluster config update ...
	I1105 17:44:06.238149   16242 start.go:255] writing updated cluster config ...
	I1105 17:44:06.238736   16242 ssh_runner.go:195] Run: rm -f paused
	I1105 17:44:06.288800   16242 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 17:44:06.290635   16242 out.go:177] * Done! kubectl is now configured to use "addons-320753" cluster and "default" namespace by default
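A hedged sketch of the gcp-auth note above (the pod name "my-pod" and the profile flag are illustrative assumptions, not taken from this run): the `gcp-auth-skip-secret` label mentioned in the log can be declared under metadata.labels in a pod manifest, or attached to an existing pod with a plain kubectl command:

  # hypothetical pod name; only the label key comes from the log above
  kubectl --context addons-320753 label pod my-pod gcp-auth-skip-secret=true

For pods that existed before the addon was enabled, the log's own suggestion of rerunning "addons enable with --refresh" would look roughly like: minikube -p addons-320753 addons enable gcp-auth --refresh.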
	
	
	==> CRI-O <==
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.447144904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829025447113898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0dcfdc4-dc0b-460f-8f00-d6ef69363f3f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.447910711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=349834f4-9004-48c6-bbb4-3c80d8338f33 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.447973612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=349834f4-9004-48c6-bbb4-3c80d8338f33 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.448370228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:922b162f702795854b468be3394a4ed21a2ed747f05089dfddb0e210cdac1f28,PodSandboxId:88d7bb213ef78226c56d1febde4b87c2f26e7503cde18657a3cb8e1d492fe0d1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730828850962849058,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-gmrtj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69f94676-d85f-4400-8899-ebaf3c04f092,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cec3dd2fd1269de0f11405b4100a2e7acb250053135b5b6d4035614dfbaaed5d,PodSandboxId:1346002c1f6a74c3ab7eb285587c90ab9d33d98534ac34753b88204fc0cb2a17,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730828710663714183,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4adce59-2101-44a5-bcc1-53c27718456c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9fe762f3082313fe72e3170b77aa50956917693f6b18b58ce5c6e39ce86fa4,PodSandboxId:741d09d08bf73084bf0e9117584aac959f61b591d7fbffc766483f1f5ca3b8af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730828650675769940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c68727b-d745-4759-8
5fb-537736d0c04a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33323a81892f4375c6cc05afd9b326f6e53f4ac782a0313cf67e8e715e34cd7,PodSandboxId:7330784d967378fb460a3ac8683e62b3425e9db40c7b1a80d51a154afda5639a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730828606311775112,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-khd9b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 5c9668b9-1b38-4b29-a16b-750ee7a74276,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0487aadaec9bd775dfd01e3d25e94f79418bc3e4e7b5297afb76b628a76f9131,PodSandboxId:1784a3a31f6659fdefd4e533e6987064775fc9f5ce8ac1b7e3473eb8dbeefec4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730828597546118608,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h5b9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012ac43a-bb0b-4a85-91d7-47b7b36eb7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bda9b1ee1f520a339c1c38c4190b89e3d54fda6da4f6bab3f97307652093ee,PodSandboxId:66f71f6fb3fc29789850be79773283d3391863635e6a6eda20082662161df53a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730828563822400290,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee0e5cc-73a4-44dc-9637-8dbfd1e52030,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3adb7b81c758126188ee64df255d65d9c40226620bc2f22e7229bbcbfdc5e6f1,PodSandboxId:f784a16ce173d9967cb1ebbb97614acab17f82a7c7b7bed794b86af9249e2446,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730828562291340879,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cttxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2478e920-f380-4190-bc39-00c34d84a86f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feda2f7d89c6255b286785586987c7cd681689d3a3fd976f599ebc5569097346,PodSandboxId:b2f5ff6e95dfeb8852a5bbd53ee22940a099fa2a3cb48edc6b4bd38fef9c3f10,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730828559575755109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24n9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cb0df5-d57b-4782-bae7-4ac5639dc01e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cadd2623fa5245c421015c9e1411ea025747aebf7b85d4096a4f25cd8bda290a,PodSandboxId:356da5ed5f56d6cdf965d434988e98f2ea4c48d52ac8d905b9415e188934147a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c996
0544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730828548475236815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1401c4598f2e3dfc80febc83d26bd72,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1b0135e5cf414fb8a3461ca0b5503878235ee3ec17aef58adf381fe1af14b8,PodSandboxId:718e09d3d1bf36d15904b46155f0c6aaeda36ff2881306a06fc43a8771b9e61a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af2
6f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730828548464806702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d09859585694c955c161417e3cd2061,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8df3d059249156bad67704cc7dd20dce767205d93e153a4008f55bd62bd6d3c,PodSandboxId:99b3e36649d9d135df7afae49b460f9b918a70235f490ac024c5232e34ffeb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730828548459917560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2510abf723755cf16e6c080513cf1135,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467f16dcbd4a797a1ef27dc71b2725cef0e3de49915c67a4d2b6f0d235b64f7d,PodSandboxId:c6b5c3d1a21b77bb05b0336bff301bcbb0cbda0b76d745f50b8b1196ee6fead7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd28
56,State:CONTAINER_RUNNING,CreatedAt:1730828548455543238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d481b44bde15a13310363b908cd76a45,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=349834f4-9004-48c6-bbb4-3c80d8338f33 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.484468188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4afdf250-0cdb-4ef6-a58b-fd151236e6b7 name=/runtime.v1.RuntimeService/Version
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.484541619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4afdf250-0cdb-4ef6-a58b-fd151236e6b7 name=/runtime.v1.RuntimeService/Version
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.485967375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73e4d6db-aa91-4e4b-be3b-f46e58ae7534 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.487390372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829025487358859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73e4d6db-aa91-4e4b-be3b-f46e58ae7534 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.487958489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df31c165-e7aa-4084-8946-47c8b2459b65 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.488019174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df31c165-e7aa-4084-8946-47c8b2459b65 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.488331518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:922b162f702795854b468be3394a4ed21a2ed747f05089dfddb0e210cdac1f28,PodSandboxId:88d7bb213ef78226c56d1febde4b87c2f26e7503cde18657a3cb8e1d492fe0d1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730828850962849058,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-gmrtj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69f94676-d85f-4400-8899-ebaf3c04f092,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cec3dd2fd1269de0f11405b4100a2e7acb250053135b5b6d4035614dfbaaed5d,PodSandboxId:1346002c1f6a74c3ab7eb285587c90ab9d33d98534ac34753b88204fc0cb2a17,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730828710663714183,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4adce59-2101-44a5-bcc1-53c27718456c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9fe762f3082313fe72e3170b77aa50956917693f6b18b58ce5c6e39ce86fa4,PodSandboxId:741d09d08bf73084bf0e9117584aac959f61b591d7fbffc766483f1f5ca3b8af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730828650675769940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c68727b-d745-4759-8
5fb-537736d0c04a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33323a81892f4375c6cc05afd9b326f6e53f4ac782a0313cf67e8e715e34cd7,PodSandboxId:7330784d967378fb460a3ac8683e62b3425e9db40c7b1a80d51a154afda5639a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730828606311775112,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-khd9b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 5c9668b9-1b38-4b29-a16b-750ee7a74276,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0487aadaec9bd775dfd01e3d25e94f79418bc3e4e7b5297afb76b628a76f9131,PodSandboxId:1784a3a31f6659fdefd4e533e6987064775fc9f5ce8ac1b7e3473eb8dbeefec4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730828597546118608,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h5b9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012ac43a-bb0b-4a85-91d7-47b7b36eb7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bda9b1ee1f520a339c1c38c4190b89e3d54fda6da4f6bab3f97307652093ee,PodSandboxId:66f71f6fb3fc29789850be79773283d3391863635e6a6eda20082662161df53a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730828563822400290,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee0e5cc-73a4-44dc-9637-8dbfd1e52030,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3adb7b81c758126188ee64df255d65d9c40226620bc2f22e7229bbcbfdc5e6f1,PodSandboxId:f784a16ce173d9967cb1ebbb97614acab17f82a7c7b7bed794b86af9249e2446,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730828562291340879,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cttxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2478e920-f380-4190-bc39-00c34d84a86f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feda2f7d89c6255b286785586987c7cd681689d3a3fd976f599ebc5569097346,PodSandboxId:b2f5ff6e95dfeb8852a5bbd53ee22940a099fa2a3cb48edc6b4bd38fef9c3f10,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730828559575755109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24n9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cb0df5-d57b-4782-bae7-4ac5639dc01e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cadd2623fa5245c421015c9e1411ea025747aebf7b85d4096a4f25cd8bda290a,PodSandboxId:356da5ed5f56d6cdf965d434988e98f2ea4c48d52ac8d905b9415e188934147a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c996
0544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730828548475236815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1401c4598f2e3dfc80febc83d26bd72,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1b0135e5cf414fb8a3461ca0b5503878235ee3ec17aef58adf381fe1af14b8,PodSandboxId:718e09d3d1bf36d15904b46155f0c6aaeda36ff2881306a06fc43a8771b9e61a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af2
6f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730828548464806702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d09859585694c955c161417e3cd2061,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8df3d059249156bad67704cc7dd20dce767205d93e153a4008f55bd62bd6d3c,PodSandboxId:99b3e36649d9d135df7afae49b460f9b918a70235f490ac024c5232e34ffeb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730828548459917560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2510abf723755cf16e6c080513cf1135,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467f16dcbd4a797a1ef27dc71b2725cef0e3de49915c67a4d2b6f0d235b64f7d,PodSandboxId:c6b5c3d1a21b77bb05b0336bff301bcbb0cbda0b76d745f50b8b1196ee6fead7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd28
56,State:CONTAINER_RUNNING,CreatedAt:1730828548455543238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d481b44bde15a13310363b908cd76a45,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df31c165-e7aa-4084-8946-47c8b2459b65 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.525269881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9e34995-25f6-4d45-a40c-8f6cb749513d name=/runtime.v1.RuntimeService/Version
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.525364407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9e34995-25f6-4d45-a40c-8f6cb749513d name=/runtime.v1.RuntimeService/Version
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.526442889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41c43310-3526-42ac-ad1b-eb00292dd332 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.527774822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829025527748460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41c43310-3526-42ac-ad1b-eb00292dd332 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.528285195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e333bf5-eff7-41b2-8f76-3465bbe3c668 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.528352911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e333bf5-eff7-41b2-8f76-3465bbe3c668 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.528630074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:922b162f702795854b468be3394a4ed21a2ed747f05089dfddb0e210cdac1f28,PodSandboxId:88d7bb213ef78226c56d1febde4b87c2f26e7503cde18657a3cb8e1d492fe0d1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730828850962849058,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-gmrtj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69f94676-d85f-4400-8899-ebaf3c04f092,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cec3dd2fd1269de0f11405b4100a2e7acb250053135b5b6d4035614dfbaaed5d,PodSandboxId:1346002c1f6a74c3ab7eb285587c90ab9d33d98534ac34753b88204fc0cb2a17,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730828710663714183,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4adce59-2101-44a5-bcc1-53c27718456c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9fe762f3082313fe72e3170b77aa50956917693f6b18b58ce5c6e39ce86fa4,PodSandboxId:741d09d08bf73084bf0e9117584aac959f61b591d7fbffc766483f1f5ca3b8af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730828650675769940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c68727b-d745-4759-8
5fb-537736d0c04a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33323a81892f4375c6cc05afd9b326f6e53f4ac782a0313cf67e8e715e34cd7,PodSandboxId:7330784d967378fb460a3ac8683e62b3425e9db40c7b1a80d51a154afda5639a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730828606311775112,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-khd9b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 5c9668b9-1b38-4b29-a16b-750ee7a74276,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0487aadaec9bd775dfd01e3d25e94f79418bc3e4e7b5297afb76b628a76f9131,PodSandboxId:1784a3a31f6659fdefd4e533e6987064775fc9f5ce8ac1b7e3473eb8dbeefec4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730828597546118608,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h5b9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012ac43a-bb0b-4a85-91d7-47b7b36eb7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bda9b1ee1f520a339c1c38c4190b89e3d54fda6da4f6bab3f97307652093ee,PodSandboxId:66f71f6fb3fc29789850be79773283d3391863635e6a6eda20082662161df53a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730828563822400290,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee0e5cc-73a4-44dc-9637-8dbfd1e52030,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3adb7b81c758126188ee64df255d65d9c40226620bc2f22e7229bbcbfdc5e6f1,PodSandboxId:f784a16ce173d9967cb1ebbb97614acab17f82a7c7b7bed794b86af9249e2446,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730828562291340879,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cttxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2478e920-f380-4190-bc39-00c34d84a86f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feda2f7d89c6255b286785586987c7cd681689d3a3fd976f599ebc5569097346,PodSandboxId:b2f5ff6e95dfeb8852a5bbd53ee22940a099fa2a3cb48edc6b4bd38fef9c3f10,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730828559575755109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24n9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cb0df5-d57b-4782-bae7-4ac5639dc01e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cadd2623fa5245c421015c9e1411ea025747aebf7b85d4096a4f25cd8bda290a,PodSandboxId:356da5ed5f56d6cdf965d434988e98f2ea4c48d52ac8d905b9415e188934147a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c996
0544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730828548475236815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1401c4598f2e3dfc80febc83d26bd72,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1b0135e5cf414fb8a3461ca0b5503878235ee3ec17aef58adf381fe1af14b8,PodSandboxId:718e09d3d1bf36d15904b46155f0c6aaeda36ff2881306a06fc43a8771b9e61a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af2
6f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730828548464806702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d09859585694c955c161417e3cd2061,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8df3d059249156bad67704cc7dd20dce767205d93e153a4008f55bd62bd6d3c,PodSandboxId:99b3e36649d9d135df7afae49b460f9b918a70235f490ac024c5232e34ffeb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730828548459917560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2510abf723755cf16e6c080513cf1135,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467f16dcbd4a797a1ef27dc71b2725cef0e3de49915c67a4d2b6f0d235b64f7d,PodSandboxId:c6b5c3d1a21b77bb05b0336bff301bcbb0cbda0b76d745f50b8b1196ee6fead7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd28
56,State:CONTAINER_RUNNING,CreatedAt:1730828548455543238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d481b44bde15a13310363b908cd76a45,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e333bf5-eff7-41b2-8f76-3465bbe3c668 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.559828074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cc866ae-c423-4164-8ed1-a8c2769259b4 name=/runtime.v1.RuntimeService/Version
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.559915505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cc866ae-c423-4164-8ed1-a8c2769259b4 name=/runtime.v1.RuntimeService/Version
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.560943270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92e88a83-f73f-4a06-b894-8ae437e90c27 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.562277388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829025562251566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92e88a83-f73f-4a06-b894-8ae437e90c27 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.562909394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28328d12-e075-4728-9155-2eb438d0aaf2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.562980562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28328d12-e075-4728-9155-2eb438d0aaf2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 17:50:25 addons-320753 crio[660]: time="2024-11-05 17:50:25.563323386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:922b162f702795854b468be3394a4ed21a2ed747f05089dfddb0e210cdac1f28,PodSandboxId:88d7bb213ef78226c56d1febde4b87c2f26e7503cde18657a3cb8e1d492fe0d1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730828850962849058,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-gmrtj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 69f94676-d85f-4400-8899-ebaf3c04f092,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cec3dd2fd1269de0f11405b4100a2e7acb250053135b5b6d4035614dfbaaed5d,PodSandboxId:1346002c1f6a74c3ab7eb285587c90ab9d33d98534ac34753b88204fc0cb2a17,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730828710663714183,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4adce59-2101-44a5-bcc1-53c27718456c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9fe762f3082313fe72e3170b77aa50956917693f6b18b58ce5c6e39ce86fa4,PodSandboxId:741d09d08bf73084bf0e9117584aac959f61b591d7fbffc766483f1f5ca3b8af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730828650675769940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c68727b-d745-4759-8
5fb-537736d0c04a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33323a81892f4375c6cc05afd9b326f6e53f4ac782a0313cf67e8e715e34cd7,PodSandboxId:7330784d967378fb460a3ac8683e62b3425e9db40c7b1a80d51a154afda5639a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730828606311775112,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-khd9b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 5c9668b9-1b38-4b29-a16b-750ee7a74276,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0487aadaec9bd775dfd01e3d25e94f79418bc3e4e7b5297afb76b628a76f9131,PodSandboxId:1784a3a31f6659fdefd4e533e6987064775fc9f5ce8ac1b7e3473eb8dbeefec4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730828597546118608,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h5b9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012ac43a-bb0b-4a85-91d7-47b7b36eb7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bda9b1ee1f520a339c1c38c4190b89e3d54fda6da4f6bab3f97307652093ee,PodSandboxId:66f71f6fb3fc29789850be79773283d3391863635e6a6eda20082662161df53a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730828563822400290,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee0e5cc-73a4-44dc-9637-8dbfd1e52030,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3adb7b81c758126188ee64df255d65d9c40226620bc2f22e7229bbcbfdc5e6f1,PodSandboxId:f784a16ce173d9967cb1ebbb97614acab17f82a7c7b7bed794b86af9249e2446,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730828562291340879,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cttxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2478e920-f380-4190-bc39-00c34d84a86f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feda2f7d89c6255b286785586987c7cd681689d3a3fd976f599ebc5569097346,PodSandboxId:b2f5ff6e95dfeb8852a5bbd53ee22940a099fa2a3cb48edc6b4bd38fef9c3f10,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730828559575755109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24n9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cb0df5-d57b-4782-bae7-4ac5639dc01e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cadd2623fa5245c421015c9e1411ea025747aebf7b85d4096a4f25cd8bda290a,PodSandboxId:356da5ed5f56d6cdf965d434988e98f2ea4c48d52ac8d905b9415e188934147a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c996
0544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730828548475236815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1401c4598f2e3dfc80febc83d26bd72,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1b0135e5cf414fb8a3461ca0b5503878235ee3ec17aef58adf381fe1af14b8,PodSandboxId:718e09d3d1bf36d15904b46155f0c6aaeda36ff2881306a06fc43a8771b9e61a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af2
6f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730828548464806702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d09859585694c955c161417e3cd2061,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8df3d059249156bad67704cc7dd20dce767205d93e153a4008f55bd62bd6d3c,PodSandboxId:99b3e36649d9d135df7afae49b460f9b918a70235f490ac024c5232e34ffeb07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730828548459917560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2510abf723755cf16e6c080513cf1135,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467f16dcbd4a797a1ef27dc71b2725cef0e3de49915c67a4d2b6f0d235b64f7d,PodSandboxId:c6b5c3d1a21b77bb05b0336bff301bcbb0cbda0b76d745f50b8b1196ee6fead7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd28
56,State:CONTAINER_RUNNING,CreatedAt:1730828548455543238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320753,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d481b44bde15a13310363b908cd76a45,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28328d12-e075-4728-9155-2eb438d0aaf2 name=/runtime.v1.RuntimeService/ListContainers
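Note: the Request/Response pairs above are CRI-O's debug-level logging of each CRI gRPC call (Version, ImageFsInfo, ListContainers) made while these logs were being collected. As a minimal sketch — assuming SSH access to the profile's node and the crio socket path reported in the node annotations further down — the same container list can be read directly from the CRI endpoint:

    $ minikube ssh -p addons-320753
    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a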
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	922b162f70279       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   88d7bb213ef78       hello-world-app-55bf9c44b4-gmrtj
	cec3dd2fd1269       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   1346002c1f6a7       nginx
	6e9fe762f3082       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   741d09d08bf73       busybox
	d33323a81892f       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   7330784d96737       metrics-server-84c5f94fbc-khd9b
	0487aadaec9bd       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                7 minutes ago       Running             amd-gpu-device-plugin     0                   1784a3a31f665       amd-gpu-device-plugin-h5b9p
	c7bda9b1ee1f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   66f71f6fb3fc2       storage-provisioner
	3adb7b81c7581       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   f784a16ce173d       coredns-7c65d6cfc9-cttxl
	feda2f7d89c62       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   b2f5ff6e95dfe       kube-proxy-24n9l
	cadd2623fa524       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   356da5ed5f56d       kube-apiserver-addons-320753
	9d1b0135e5cf4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   718e09d3d1bf3       kube-controller-manager-addons-320753
	a8df3d0592491       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   99b3e36649d9d       etcd-addons-320753
	467f16dcbd4a7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   c6b5c3d1a21b7       kube-scheduler-addons-320753
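The table above is the runtime's per-container view of the same workloads listed in the debug dump; the corresponding pod-level view can be pulled with kubectl against the same context, for example:

    $ kubectl --context addons-320753 get pods -A -o wide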
	
	
	==> coredns [3adb7b81c758126188ee64df255d65d9c40226620bc2f22e7229bbcbfdc5e6f1] <==
	[INFO] 10.244.0.22:53193 - 54546 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070489s
	[INFO] 10.244.0.22:53193 - 35164 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068204s
	[INFO] 10.244.0.22:53193 - 44996 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000114361s
	[INFO] 10.244.0.22:53193 - 38102 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000100419s
	[INFO] 10.244.0.22:37084 - 3407 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00008792s
	[INFO] 10.244.0.22:37084 - 51980 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000081922s
	[INFO] 10.244.0.22:37084 - 10303 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007397s
	[INFO] 10.244.0.22:37084 - 32276 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070447s
	[INFO] 10.244.0.22:37084 - 30114 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074817s
	[INFO] 10.244.0.22:37084 - 51040 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045574s
	[INFO] 10.244.0.22:37084 - 24904 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066509s
	[INFO] 10.244.0.22:41345 - 4398 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000096939s
	[INFO] 10.244.0.22:36645 - 46582 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000055504s
	[INFO] 10.244.0.22:41345 - 16665 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053982s
	[INFO] 10.244.0.22:41345 - 21018 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003495s
	[INFO] 10.244.0.22:41345 - 11135 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031618s
	[INFO] 10.244.0.22:41345 - 47584 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036853s
	[INFO] 10.244.0.22:41345 - 55355 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036281s
	[INFO] 10.244.0.22:41345 - 55029 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032316s
	[INFO] 10.244.0.22:36645 - 57505 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000208573s
	[INFO] 10.244.0.22:36645 - 8747 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069935s
	[INFO] 10.244.0.22:36645 - 33577 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047574s
	[INFO] 10.244.0.22:36645 - 8435 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039148s
	[INFO] 10.244.0.22:36645 - 24482 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035373s
	[INFO] 10.244.0.22:36645 - 55288 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037663s
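The repeated NXDOMAIN answers above are expected behaviour rather than an error: with the default pod resolver settings (ndots:5), the client — apparently a pod in the ingress-nginx namespace, given the ingress-nginx.svc.cluster.local suffix — walks its DNS search domains before the fully qualified service name resolves with NOERROR. As a sketch for confirming the search path from inside this cluster, using the busybox pod already running in the default namespace:

    $ kubectl --context addons-320753 exec busybox -- cat /etc/resolv.conf
    $ kubectl --context addons-320753 exec busybox -- nslookup hello-world-app.default.svc.cluster.local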
	
	
	==> describe nodes <==
	Name:               addons-320753
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-320753
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=addons-320753
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T17_42_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-320753
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 17:42:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-320753
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 17:50:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 17:47:40 +0000   Tue, 05 Nov 2024 17:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 17:47:40 +0000   Tue, 05 Nov 2024 17:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 17:47:40 +0000   Tue, 05 Nov 2024 17:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 17:47:40 +0000   Tue, 05 Nov 2024 17:42:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    addons-320753
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 519d04ef668a4324b1894f66ef22ec87
	  System UUID:                519d04ef-668a-4324-b189-4f66ef22ec87
	  Boot ID:                    84d65ca6-e314-4af0-a328-03b507c1d577
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  default                     hello-world-app-55bf9c44b4-gmrtj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 amd-gpu-device-plugin-h5b9p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 coredns-7c65d6cfc9-cttxl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m47s
	  kube-system                 etcd-addons-320753                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m52s
	  kube-system                 kube-apiserver-addons-320753             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-controller-manager-addons-320753    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-proxy-24n9l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-scheduler-addons-320753             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 metrics-server-84c5f94fbc-khd9b          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m42s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m45s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m58s (x8 over 7m58s)  kubelet          Node addons-320753 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m58s (x8 over 7m58s)  kubelet          Node addons-320753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m58s (x7 over 7m58s)  kubelet          Node addons-320753 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m52s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m52s                  kubelet          Node addons-320753 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m52s                  kubelet          Node addons-320753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m52s                  kubelet          Node addons-320753 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m51s                  kubelet          Node addons-320753 status is now: NodeReady
	  Normal  RegisteredNode           7m48s                  node-controller  Node addons-320753 event: Registered Node addons-320753 in Controller
	  Normal  CIDRAssignmentFailed     7m48s                  cidrAllocator    Node addons-320753 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +5.342454] systemd-fstab-generator[1332]: Ignoring "noauto" option for root device
	[  +0.147995] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.103080] kauditd_printk_skb: 139 callbacks suppressed
	[  +5.032916] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.657212] kauditd_printk_skb: 71 callbacks suppressed
	[Nov 5 17:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.478901] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.310188] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.272035] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.868464] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.874820] kauditd_printk_skb: 45 callbacks suppressed
	[Nov 5 17:44] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.737351] kauditd_printk_skb: 14 callbacks suppressed
	[ +23.480672] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.341328] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.004968] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.016302] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 5 17:45] kauditd_printk_skb: 61 callbacks suppressed
	[  +7.425473] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.630895] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.317463] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.784176] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.868482] kauditd_printk_skb: 7 callbacks suppressed
	[Nov 5 17:47] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.077603] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [a8df3d059249156bad67704cc7dd20dce767205d93e153a4008f55bd62bd6d3c] <==
	{"level":"info","ts":"2024-11-05T17:43:57.203556Z","caller":"traceutil/trace.go:171","msg":"trace[1068852426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1123; }","duration":"245.619405ms","start":"2024-11-05T17:43:56.957931Z","end":"2024-11-05T17:43:57.203551Z","steps":["trace[1068852426] 'agreement among raft nodes before linearized reading'  (duration: 245.581582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:43:57.203744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.022987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2024-11-05T17:43:57.203776Z","caller":"traceutil/trace.go:171","msg":"trace[2043905244] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1123; }","duration":"166.055948ms","start":"2024-11-05T17:43:57.037714Z","end":"2024-11-05T17:43:57.203770Z","steps":["trace[2043905244] 'agreement among raft nodes before linearized reading'  (duration: 165.947529ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:44:00.372026Z","caller":"traceutil/trace.go:171","msg":"trace[118910243] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"450.741229ms","start":"2024-11-05T17:43:59.921218Z","end":"2024-11-05T17:44:00.371959Z","steps":["trace[118910243] 'process raft request'  (duration: 450.628192ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:44:00.372172Z","caller":"traceutil/trace.go:171","msg":"trace[1757073185] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1170; }","duration":"152.50069ms","start":"2024-11-05T17:44:00.219520Z","end":"2024-11-05T17:44:00.372021Z","steps":["trace[1757073185] 'read index received'  (duration: 152.491158ms)","trace[1757073185] 'applied index is now lower than readState.Index'  (duration: 8.28µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T17:44:00.372327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.800471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:44:00.372394Z","caller":"traceutil/trace.go:171","msg":"trace[897402176] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1139; }","duration":"152.883215ms","start":"2024-11-05T17:44:00.219502Z","end":"2024-11-05T17:44:00.372386Z","steps":["trace[897402176] 'agreement among raft nodes before linearized reading'  (duration: 152.742893ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:44:00.372365Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T17:43:59.921200Z","time spent":"451.00298ms","remote":"127.0.0.1:41134","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1127 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-11-05T17:44:00.376411Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.71625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:44:00.376499Z","caller":"traceutil/trace.go:171","msg":"trace[526389934] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"114.813676ms","start":"2024-11-05T17:44:00.261677Z","end":"2024-11-05T17:44:00.376490Z","steps":["trace[526389934] 'agreement among raft nodes before linearized reading'  (duration: 114.634909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:44:00.377127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.970584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:44:00.377214Z","caller":"traceutil/trace.go:171","msg":"trace[553349832] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1140; }","duration":"105.060143ms","start":"2024-11-05T17:44:00.272142Z","end":"2024-11-05T17:44:00.377202Z","steps":["trace[553349832] 'agreement among raft nodes before linearized reading'  (duration: 104.950122ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:44:43.941424Z","caller":"traceutil/trace.go:171","msg":"trace[295256943] transaction","detail":"{read_only:false; response_revision:1349; number_of_response:1; }","duration":"273.369648ms","start":"2024-11-05T17:44:43.667986Z","end":"2024-11-05T17:44:43.941356Z","steps":["trace[295256943] 'process raft request'  (duration: 273.143357ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:45:14.558583Z","caller":"traceutil/trace.go:171","msg":"trace[1926740521] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"149.725581ms","start":"2024-11-05T17:45:14.408822Z","end":"2024-11-05T17:45:14.558548Z","steps":["trace[1926740521] 'process raft request'  (duration: 149.529941ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:45:22.641510Z","caller":"traceutil/trace.go:171","msg":"trace[146489829] linearizableReadLoop","detail":"{readStateIndex:1700; appliedIndex:1699; }","duration":"217.533791ms","start":"2024-11-05T17:45:22.423962Z","end":"2024-11-05T17:45:22.641496Z","steps":["trace[146489829] 'read index received'  (duration: 217.430655ms)","trace[146489829] 'applied index is now lower than readState.Index'  (duration: 102.657µs)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:45:22.641601Z","caller":"traceutil/trace.go:171","msg":"trace[2035047528] transaction","detail":"{read_only:false; response_revision:1643; number_of_response:1; }","duration":"229.008433ms","start":"2024-11-05T17:45:22.412587Z","end":"2024-11-05T17:45:22.641595Z","steps":["trace[2035047528] 'process raft request'  (duration: 228.799372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:45:22.641813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.788954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-11-05T17:45:22.641836Z","caller":"traceutil/trace.go:171","msg":"trace[797036277] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1643; }","duration":"217.871626ms","start":"2024-11-05T17:45:22.423959Z","end":"2024-11-05T17:45:22.641830Z","steps":["trace[797036277] 'agreement among raft nodes before linearized reading'  (duration: 217.721677ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:45:22.641886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.180323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:45:22.641919Z","caller":"traceutil/trace.go:171","msg":"trace[2145014752] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1643; }","duration":"172.231623ms","start":"2024-11-05T17:45:22.469679Z","end":"2024-11-05T17:45:22.641911Z","steps":["trace[2145014752] 'agreement among raft nodes before linearized reading'  (duration: 172.16511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:45:22.642085Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.697881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-dc83c679-ddcc-4681-bf85-ba96348fe5e0\" ","response":"range_response_count:1 size:1262"}
	{"level":"info","ts":"2024-11-05T17:45:22.642109Z","caller":"traceutil/trace.go:171","msg":"trace[924296525] range","detail":"{range_begin:/registry/persistentvolumes/pvc-dc83c679-ddcc-4681-bf85-ba96348fe5e0; range_end:; response_count:1; response_revision:1643; }","duration":"100.773533ms","start":"2024-11-05T17:45:22.541328Z","end":"2024-11-05T17:45:22.642102Z","steps":["trace[924296525] 'agreement among raft nodes before linearized reading'  (duration: 100.662206ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:45:48.274296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.405991ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10689173857718937377 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/csi-snapshotter-role\" mod_revision:779 > success:<request_delete_range:<key:\"/registry/clusterrolebindings/csi-snapshotter-role\" > > failure:<request_range:<key:\"/registry/clusterrolebindings/csi-snapshotter-role\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-11-05T17:45:48.274371Z","caller":"traceutil/trace.go:171","msg":"trace[1908162290] linearizableReadLoop","detail":"{readStateIndex:1873; appliedIndex:1872; }","duration":"213.986536ms","start":"2024-11-05T17:45:48.060376Z","end":"2024-11-05T17:45:48.274362Z","steps":["trace[1908162290] 'read index received'  (duration: 9.156139ms)","trace[1908162290] 'applied index is now lower than readState.Index'  (duration: 204.829491ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:45:48.274430Z","caller":"traceutil/trace.go:171","msg":"trace[1992322215] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1808; }","duration":"276.892309ms","start":"2024-11-05T17:45:47.997532Z","end":"2024-11-05T17:45:48.274425Z","steps":["trace[1992322215] 'process raft request'  (duration: 72.042473ms)","trace[1992322215] 'compare'  (duration: 203.964119ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:50:25 up 8 min,  0 users,  load average: 0.17, 0.78, 0.61
	Linux addons-320753 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cadd2623fa5245c421015c9e1411ea025747aebf7b85d4096a4f25cd8bda290a] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1105 17:44:34.193797       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.36.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.36.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.36.76:443: connect: connection refused" logger="UnhandledError"
	I1105 17:44:34.217751       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1105 17:44:39.081724       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.14.226"}
	I1105 17:45:02.324546       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1105 17:45:03.461159       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1105 17:45:07.955429       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1105 17:45:08.148611       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.52.203"}
	E1105 17:45:14.690739       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1105 17:45:30.537276       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1105 17:45:47.012417       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.012521       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:45:47.027835       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.027893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:45:47.077695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.078304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:45:47.147537       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.147590       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:45:47.155716       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:45:47.155765       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1105 17:45:48.148573       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1105 17:45:48.156505       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1105 17:45:48.163105       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1105 17:47:28.210389       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.162.23"}
	
	
	==> kube-controller-manager [9d1b0135e5cf414fb8a3461ca0b5503878235ee3ec17aef58adf381fe1af14b8] <==
	E1105 17:47:51.673448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:48:26.789579       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:48:26.789909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:48:30.915800       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:48:30.915925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:48:31.702872       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:48:31.702929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:48:33.599561       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:48:33.599710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:49:16.995268       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:49:16.995430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:49:17.400812       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:49:17.400947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:49:23.805443       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:49:23.805499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:49:26.702624       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:49:26.702734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:49:58.196668       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:49:58.196787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:49:59.088374       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:49:59.088426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:50:06.254942       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:50:06.255005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:50:15.197419       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:50:15.197565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [feda2f7d89c6255b286785586987c7cd681689d3a3fd976f599ebc5569097346] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 17:42:40.585927       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 17:42:40.622102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.201"]
	E1105 17:42:40.622182       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 17:42:40.729298       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 17:42:40.729328       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 17:42:40.729362       1 server_linux.go:169] "Using iptables Proxier"
	I1105 17:42:40.732599       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 17:42:40.732874       1 server.go:483] "Version info" version="v1.31.2"
	I1105 17:42:40.732888       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 17:42:40.734817       1 config.go:199] "Starting service config controller"
	I1105 17:42:40.734846       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 17:42:40.734863       1 config.go:105] "Starting endpoint slice config controller"
	I1105 17:42:40.734867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 17:42:40.735285       1 config.go:328] "Starting node config controller"
	I1105 17:42:40.735309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 17:42:40.835726       1 shared_informer.go:320] Caches are synced for node config
	I1105 17:42:40.835755       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 17:42:40.835762       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [467f16dcbd4a797a1ef27dc71b2725cef0e3de49915c67a4d2b6f0d235b64f7d] <==
	W1105 17:42:30.864854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 17:42:30.865074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.716092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 17:42:31.716194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.741307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 17:42:31.741350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.821771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 17:42:31.821861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.875213       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 17:42:31.875309       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 17:42:31.898655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 17:42:31.898779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.918456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 17:42:31.918824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.962822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 17:42:31.962870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:31.967187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 17:42:31.967265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:32.099391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 17:42:32.099444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:32.177329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 17:42:32.177377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:42:32.177465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1105 17:42:32.177493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1105 17:42:33.856885       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 17:49:03 addons-320753 kubelet[1204]: E1105 17:49:03.840210    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828943839783248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:03 addons-320753 kubelet[1204]: E1105 17:49:03.840487    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828943839783248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:13 addons-320753 kubelet[1204]: E1105 17:49:13.842728    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828953842377844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:13 addons-320753 kubelet[1204]: E1105 17:49:13.842763    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828953842377844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:21 addons-320753 kubelet[1204]: I1105 17:49:21.429276    1204 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 05 17:49:23 addons-320753 kubelet[1204]: E1105 17:49:23.844981    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828963844540458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:23 addons-320753 kubelet[1204]: E1105 17:49:23.845406    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828963844540458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:25 addons-320753 kubelet[1204]: I1105 17:49:25.427315    1204 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h5b9p" secret="" err="secret \"gcp-auth\" not found"
	Nov 05 17:49:33 addons-320753 kubelet[1204]: E1105 17:49:33.457643    1204 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 17:49:33 addons-320753 kubelet[1204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 17:49:33 addons-320753 kubelet[1204]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 17:49:33 addons-320753 kubelet[1204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 17:49:33 addons-320753 kubelet[1204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 17:49:33 addons-320753 kubelet[1204]: E1105 17:49:33.847994    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828973847637526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:33 addons-320753 kubelet[1204]: E1105 17:49:33.848024    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828973847637526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:43 addons-320753 kubelet[1204]: E1105 17:49:43.850710    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828983850232022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:43 addons-320753 kubelet[1204]: E1105 17:49:43.851002    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828983850232022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:53 addons-320753 kubelet[1204]: E1105 17:49:53.854022    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828993853660829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:53 addons-320753 kubelet[1204]: E1105 17:49:53.854099    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828993853660829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:03 addons-320753 kubelet[1204]: E1105 17:50:03.857103    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829003856530989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:03 addons-320753 kubelet[1204]: E1105 17:50:03.857142    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829003856530989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:13 addons-320753 kubelet[1204]: E1105 17:50:13.860274    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829013859610639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:13 addons-320753 kubelet[1204]: E1105 17:50:13.860587    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829013859610639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:23 addons-320753 kubelet[1204]: E1105 17:50:23.867547    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829023865989759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:23 addons-320753 kubelet[1204]: E1105 17:50:23.867636    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829023865989759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603351,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c7bda9b1ee1f520a339c1c38c4190b89e3d54fda6da4f6bab3f97307652093ee] <==
	I1105 17:42:44.546376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 17:42:44.632589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 17:42:44.632654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 17:42:44.690370       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 17:42:44.690525       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-320753_00115da2-6d14-4553-8f96-a127f1403bf1!
	I1105 17:42:44.690591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66faf5da-69be-4e5b-a7e0-be6255ac4b49", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-320753_00115da2-6d14-4553-8f96-a127f1403bf1 became leader
	I1105 17:42:44.896190       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-320753_00115da2-6d14-4553-8f96-a127f1403bf1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-320753 -n addons-320753
helpers_test.go:261: (dbg) Run:  kubectl --context addons-320753 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (331.88s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-320753
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-320753: exit status 82 (2m0.462112485s)

                                                
                                                
-- stdout --
	* Stopping node "addons-320753"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-320753" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-320753
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-320753: exit status 11 (21.52245501s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-320753" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-320753
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-320753: exit status 11 (6.143908523s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-320753" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-320753
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-320753: exit status 11 (6.143211358s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-320753" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.27s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 node stop m02 -v=7 --alsologtostderr
E1105 18:08:12.396249   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:08:53.358376   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:09:06.921354   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844661 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.463366131s)

                                                
                                                
-- stdout --
	* Stopping node "ha-844661-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:07:53.594018   31178 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:07:53.594177   31178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:07:53.594189   31178 out.go:358] Setting ErrFile to fd 2...
	I1105 18:07:53.594195   31178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:07:53.594439   31178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:07:53.594696   31178 mustload.go:65] Loading cluster: ha-844661
	I1105 18:07:53.595229   31178 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:07:53.595254   31178 stop.go:39] StopHost: ha-844661-m02
	I1105 18:07:53.595652   31178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:07:53.595699   31178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:07:53.611325   31178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I1105 18:07:53.611820   31178 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:07:53.612333   31178 main.go:141] libmachine: Using API Version  1
	I1105 18:07:53.612353   31178 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:07:53.612699   31178 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:07:53.615161   31178 out.go:177] * Stopping node "ha-844661-m02"  ...
	I1105 18:07:53.616451   31178 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1105 18:07:53.616487   31178 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:07:53.616703   31178 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1105 18:07:53.616732   31178 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:07:53.619695   31178 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:07:53.620114   31178 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:07:53.620138   31178 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:07:53.620438   31178 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:07:53.620606   31178 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:07:53.620756   31178 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:07:53.620903   31178 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:07:53.706193   31178 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1105 18:07:53.759785   31178 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1105 18:07:53.813712   31178 main.go:141] libmachine: Stopping "ha-844661-m02"...
	I1105 18:07:53.813748   31178 main.go:141] libmachine: (ha-844661-m02) Calling .GetState
	I1105 18:07:53.815253   31178 main.go:141] libmachine: (ha-844661-m02) Calling .Stop
	I1105 18:07:53.818747   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 0/120
	I1105 18:07:54.820327   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 1/120
	I1105 18:07:55.821626   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 2/120
	I1105 18:07:56.822698   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 3/120
	I1105 18:07:57.824841   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 4/120
	I1105 18:07:58.826859   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 5/120
	I1105 18:07:59.828855   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 6/120
	I1105 18:08:00.830191   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 7/120
	I1105 18:08:01.831422   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 8/120
	I1105 18:08:02.833483   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 9/120
	I1105 18:08:03.835494   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 10/120
	I1105 18:08:04.836732   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 11/120
	I1105 18:08:05.837983   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 12/120
	I1105 18:08:06.839304   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 13/120
	I1105 18:08:07.840854   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 14/120
	I1105 18:08:08.842678   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 15/120
	I1105 18:08:09.844012   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 16/120
	I1105 18:08:10.846338   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 17/120
	I1105 18:08:11.847889   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 18/120
	I1105 18:08:12.849314   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 19/120
	I1105 18:08:13.851301   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 20/120
	I1105 18:08:14.853625   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 21/120
	I1105 18:08:15.854954   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 22/120
	I1105 18:08:16.856278   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 23/120
	I1105 18:08:17.857489   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 24/120
	I1105 18:08:18.859516   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 25/120
	I1105 18:08:19.861311   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 26/120
	I1105 18:08:20.862572   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 27/120
	I1105 18:08:21.864118   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 28/120
	I1105 18:08:22.865693   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 29/120
	I1105 18:08:23.867780   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 30/120
	I1105 18:08:24.869503   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 31/120
	I1105 18:08:25.871318   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 32/120
	I1105 18:08:26.873293   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 33/120
	I1105 18:08:27.874605   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 34/120
	I1105 18:08:28.877071   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 35/120
	I1105 18:08:29.878520   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 36/120
	I1105 18:08:30.879824   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 37/120
	I1105 18:08:31.881459   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 38/120
	I1105 18:08:32.882765   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 39/120
	I1105 18:08:33.884968   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 40/120
	I1105 18:08:34.886321   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 41/120
	I1105 18:08:35.887590   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 42/120
	I1105 18:08:36.889481   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 43/120
	I1105 18:08:37.890825   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 44/120
	I1105 18:08:38.893184   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 45/120
	I1105 18:08:39.894424   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 46/120
	I1105 18:08:40.895865   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 47/120
	I1105 18:08:41.897229   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 48/120
	I1105 18:08:42.899516   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 49/120
	I1105 18:08:43.901720   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 50/120
	I1105 18:08:44.903041   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 51/120
	I1105 18:08:45.904542   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 52/120
	I1105 18:08:46.905981   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 53/120
	I1105 18:08:47.907565   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 54/120
	I1105 18:08:48.909341   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 55/120
	I1105 18:08:49.910528   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 56/120
	I1105 18:08:50.911684   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 57/120
	I1105 18:08:51.913471   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 58/120
	I1105 18:08:52.914658   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 59/120
	I1105 18:08:53.916848   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 60/120
	I1105 18:08:54.918171   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 61/120
	I1105 18:08:55.919533   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 62/120
	I1105 18:08:56.921103   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 63/120
	I1105 18:08:57.922318   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 64/120
	I1105 18:08:58.923704   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 65/120
	I1105 18:08:59.925214   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 66/120
	I1105 18:09:00.926883   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 67/120
	I1105 18:09:01.928283   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 68/120
	I1105 18:09:02.929508   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 69/120
	I1105 18:09:03.931496   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 70/120
	I1105 18:09:04.933546   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 71/120
	I1105 18:09:05.934784   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 72/120
	I1105 18:09:06.936096   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 73/120
	I1105 18:09:07.937611   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 74/120
	I1105 18:09:08.939418   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 75/120
	I1105 18:09:09.941386   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 76/120
	I1105 18:09:10.943505   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 77/120
	I1105 18:09:11.945593   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 78/120
	I1105 18:09:12.947726   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 79/120
	I1105 18:09:13.949469   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 80/120
	I1105 18:09:14.950763   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 81/120
	I1105 18:09:15.951980   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 82/120
	I1105 18:09:16.953344   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 83/120
	I1105 18:09:17.954797   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 84/120
	I1105 18:09:18.956652   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 85/120
	I1105 18:09:19.957886   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 86/120
	I1105 18:09:20.959131   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 87/120
	I1105 18:09:21.961442   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 88/120
	I1105 18:09:22.962752   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 89/120
	I1105 18:09:23.964671   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 90/120
	I1105 18:09:24.966005   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 91/120
	I1105 18:09:25.967992   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 92/120
	I1105 18:09:26.969323   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 93/120
	I1105 18:09:27.970922   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 94/120
	I1105 18:09:28.972864   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 95/120
	I1105 18:09:29.974637   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 96/120
	I1105 18:09:30.975956   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 97/120
	I1105 18:09:31.977567   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 98/120
	I1105 18:09:32.978642   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 99/120
	I1105 18:09:33.981010   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 100/120
	I1105 18:09:34.982135   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 101/120
	I1105 18:09:35.983439   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 102/120
	I1105 18:09:36.985320   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 103/120
	I1105 18:09:37.986666   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 104/120
	I1105 18:09:38.988563   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 105/120
	I1105 18:09:39.990383   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 106/120
	I1105 18:09:40.991868   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 107/120
	I1105 18:09:41.993439   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 108/120
	I1105 18:09:42.994730   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 109/120
	I1105 18:09:43.996764   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 110/120
	I1105 18:09:44.997993   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 111/120
	I1105 18:09:45.999976   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 112/120
	I1105 18:09:47.001293   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 113/120
	I1105 18:09:48.003557   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 114/120
	I1105 18:09:49.005489   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 115/120
	I1105 18:09:50.006818   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 116/120
	I1105 18:09:51.008799   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 117/120
	I1105 18:09:52.010180   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 118/120
	I1105 18:09:53.012077   31178 main.go:141] libmachine: (ha-844661-m02) Waiting for machine to stop 119/120
	I1105 18:09:54.013034   31178 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1105 18:09:54.013173   31178 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-844661 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr: (18.677921666s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
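The node stop output above reflects a bounded stop-and-wait pattern: minikube backs up /etc/cni and /etc/kubernetes, asks the driver to stop the VM, then polls the machine state once per second for up to 120 attempts before giving up with exit status 30 because the VM never leaves "Running". Below is a minimal Go sketch of that pattern only; vm, stopWithTimeout, and fakeVM are hypothetical, illustrative names, not minikube's actual types.

// A minimal sketch (not minikube's actual code) of the bounded
// stop-and-wait pattern visible in the log above: request a stop,
// then poll the VM state once per second for a fixed number of
// attempts, failing if the machine never leaves "Running".
package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a hypothetical interface standing in for the machine driver.
type vm interface {
	Stop() error            // ask the hypervisor to stop the VM
	State() (string, error) // report the current VM state
}

func stopWithTimeout(m vm, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		st, err := m.State()
		if err != nil {
			return err
		}
		if st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// fakeVM never stops, mirroring the behaviour captured in the log.
type fakeVM struct{ state string }

func (f *fakeVM) Stop() error            { return nil }
func (f *fakeVM) State() (string, error) { return f.state, nil }

func main() {
	// Three attempts keep the demo short; the run above used 120.
	err := stopWithTimeout(&fakeVM{state: "Running"}, 3)
	fmt.Println("stop err:", err)
}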
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844661 -n ha-844661
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 logs -n 25: (1.409986068s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m03_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m04 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp testdata/cp-test.txt                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m04_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03:/home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m03 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-844661 node stop m02 -v=7                                                     | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:03:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:03:20.652608   27131 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:03:20.652749   27131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:03:20.652760   27131 out.go:358] Setting ErrFile to fd 2...
	I1105 18:03:20.652767   27131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:03:20.652948   27131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:03:20.653500   27131 out.go:352] Setting JSON to false
	I1105 18:03:20.654349   27131 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2743,"bootTime":1730827058,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:03:20.654437   27131 start.go:139] virtualization: kvm guest
	I1105 18:03:20.656534   27131 out.go:177] * [ha-844661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:03:20.657972   27131 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:03:20.658005   27131 notify.go:220] Checking for updates...
	I1105 18:03:20.660463   27131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:03:20.661864   27131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:03:20.663111   27131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:20.664367   27131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:03:20.665603   27131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:03:20.666934   27131 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:03:20.701089   27131 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 18:03:20.702358   27131 start.go:297] selected driver: kvm2
	I1105 18:03:20.702375   27131 start.go:901] validating driver "kvm2" against <nil>
	I1105 18:03:20.702385   27131 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:03:20.703116   27131 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:03:20.703189   27131 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:03:20.718290   27131 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:03:20.718330   27131 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 18:03:20.718556   27131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:03:20.718584   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:20.718622   27131 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1105 18:03:20.718632   27131 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 18:03:20.718676   27131 start.go:340] cluster config:
	{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:03:20.718795   27131 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:03:20.720599   27131 out.go:177] * Starting "ha-844661" primary control-plane node in "ha-844661" cluster
	I1105 18:03:20.721815   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:03:20.721849   27131 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:03:20.721872   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:03:20.721982   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:03:20.721996   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:03:20.722409   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:03:20.722435   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json: {Name:mkaefcdd76905e10868a2bf21132faf3026da59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:20.722574   27131 start.go:360] acquireMachinesLock for ha-844661: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:03:20.722613   27131 start.go:364] duration metric: took 21.652µs to acquireMachinesLock for "ha-844661"
	I1105 18:03:20.722627   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:03:20.722690   27131 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 18:03:20.724172   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:03:20.724279   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:03:20.724320   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:03:20.738289   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I1105 18:03:20.738756   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:03:20.739283   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:03:20.739302   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:03:20.739702   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:03:20.739881   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:20.740007   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:20.740175   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:03:20.740205   27131 client.go:168] LocalClient.Create starting
	I1105 18:03:20.740238   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:03:20.740272   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:03:20.740288   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:03:20.740341   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:03:20.740359   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:03:20.740374   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:03:20.740388   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:03:20.740400   27131 main.go:141] libmachine: (ha-844661) Calling .PreCreateCheck
	I1105 18:03:20.740713   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:20.741068   27131 main.go:141] libmachine: Creating machine...
	I1105 18:03:20.741080   27131 main.go:141] libmachine: (ha-844661) Calling .Create
	I1105 18:03:20.741210   27131 main.go:141] libmachine: (ha-844661) Creating KVM machine...
	I1105 18:03:20.742313   27131 main.go:141] libmachine: (ha-844661) DBG | found existing default KVM network
	I1105 18:03:20.742933   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:20.742806   27154 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1105 18:03:20.742963   27131 main.go:141] libmachine: (ha-844661) DBG | created network xml: 
	I1105 18:03:20.742994   27131 main.go:141] libmachine: (ha-844661) DBG | <network>
	I1105 18:03:20.743008   27131 main.go:141] libmachine: (ha-844661) DBG |   <name>mk-ha-844661</name>
	I1105 18:03:20.743015   27131 main.go:141] libmachine: (ha-844661) DBG |   <dns enable='no'/>
	I1105 18:03:20.743024   27131 main.go:141] libmachine: (ha-844661) DBG |   
	I1105 18:03:20.743029   27131 main.go:141] libmachine: (ha-844661) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1105 18:03:20.743036   27131 main.go:141] libmachine: (ha-844661) DBG |     <dhcp>
	I1105 18:03:20.743041   27131 main.go:141] libmachine: (ha-844661) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1105 18:03:20.743049   27131 main.go:141] libmachine: (ha-844661) DBG |     </dhcp>
	I1105 18:03:20.743053   27131 main.go:141] libmachine: (ha-844661) DBG |   </ip>
	I1105 18:03:20.743060   27131 main.go:141] libmachine: (ha-844661) DBG |   
	I1105 18:03:20.743066   27131 main.go:141] libmachine: (ha-844661) DBG | </network>
	I1105 18:03:20.743074   27131 main.go:141] libmachine: (ha-844661) DBG | 
	I1105 18:03:20.748364   27131 main.go:141] libmachine: (ha-844661) DBG | trying to create private KVM network mk-ha-844661 192.168.39.0/24...
	I1105 18:03:20.811114   27131 main.go:141] libmachine: (ha-844661) DBG | private KVM network mk-ha-844661 192.168.39.0/24 created
	I1105 18:03:20.811141   27131 main.go:141] libmachine: (ha-844661) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 ...
	I1105 18:03:20.811159   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:20.811087   27154 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:20.811177   27131 main.go:141] libmachine: (ha-844661) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:03:20.811237   27131 main.go:141] libmachine: (ha-844661) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:03:21.057798   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.057650   27154 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa...
	I1105 18:03:21.226724   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.226590   27154 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/ha-844661.rawdisk...
	I1105 18:03:21.226750   27131 main.go:141] libmachine: (ha-844661) DBG | Writing magic tar header
	I1105 18:03:21.226760   27131 main.go:141] libmachine: (ha-844661) DBG | Writing SSH key tar header
	I1105 18:03:21.226768   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.226707   27154 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 ...
	I1105 18:03:21.226781   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661
	I1105 18:03:21.226859   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 (perms=drwx------)
	I1105 18:03:21.226880   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:03:21.226887   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:03:21.226897   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:21.226904   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:03:21.226909   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:03:21.226916   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:03:21.226920   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:03:21.226927   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home
	I1105 18:03:21.226932   27131 main.go:141] libmachine: (ha-844661) DBG | Skipping /home - not owner
	I1105 18:03:21.226941   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:03:21.226950   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:03:21.226957   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:03:21.226962   27131 main.go:141] libmachine: (ha-844661) Creating domain...
	I1105 18:03:21.228177   27131 main.go:141] libmachine: (ha-844661) define libvirt domain using xml: 
	I1105 18:03:21.228198   27131 main.go:141] libmachine: (ha-844661) <domain type='kvm'>
	I1105 18:03:21.228204   27131 main.go:141] libmachine: (ha-844661)   <name>ha-844661</name>
	I1105 18:03:21.228209   27131 main.go:141] libmachine: (ha-844661)   <memory unit='MiB'>2200</memory>
	I1105 18:03:21.228214   27131 main.go:141] libmachine: (ha-844661)   <vcpu>2</vcpu>
	I1105 18:03:21.228218   27131 main.go:141] libmachine: (ha-844661)   <features>
	I1105 18:03:21.228223   27131 main.go:141] libmachine: (ha-844661)     <acpi/>
	I1105 18:03:21.228228   27131 main.go:141] libmachine: (ha-844661)     <apic/>
	I1105 18:03:21.228233   27131 main.go:141] libmachine: (ha-844661)     <pae/>
	I1105 18:03:21.228241   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228249   27131 main.go:141] libmachine: (ha-844661)   </features>
	I1105 18:03:21.228254   27131 main.go:141] libmachine: (ha-844661)   <cpu mode='host-passthrough'>
	I1105 18:03:21.228261   27131 main.go:141] libmachine: (ha-844661)   
	I1105 18:03:21.228268   27131 main.go:141] libmachine: (ha-844661)   </cpu>
	I1105 18:03:21.228298   27131 main.go:141] libmachine: (ha-844661)   <os>
	I1105 18:03:21.228318   27131 main.go:141] libmachine: (ha-844661)     <type>hvm</type>
	I1105 18:03:21.228325   27131 main.go:141] libmachine: (ha-844661)     <boot dev='cdrom'/>
	I1105 18:03:21.228329   27131 main.go:141] libmachine: (ha-844661)     <boot dev='hd'/>
	I1105 18:03:21.228355   27131 main.go:141] libmachine: (ha-844661)     <bootmenu enable='no'/>
	I1105 18:03:21.228375   27131 main.go:141] libmachine: (ha-844661)   </os>
	I1105 18:03:21.228385   27131 main.go:141] libmachine: (ha-844661)   <devices>
	I1105 18:03:21.228403   27131 main.go:141] libmachine: (ha-844661)     <disk type='file' device='cdrom'>
	I1105 18:03:21.228418   27131 main.go:141] libmachine: (ha-844661)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/boot2docker.iso'/>
	I1105 18:03:21.228429   27131 main.go:141] libmachine: (ha-844661)       <target dev='hdc' bus='scsi'/>
	I1105 18:03:21.228437   27131 main.go:141] libmachine: (ha-844661)       <readonly/>
	I1105 18:03:21.228450   27131 main.go:141] libmachine: (ha-844661)     </disk>
	I1105 18:03:21.228462   27131 main.go:141] libmachine: (ha-844661)     <disk type='file' device='disk'>
	I1105 18:03:21.228474   27131 main.go:141] libmachine: (ha-844661)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:03:21.228488   27131 main.go:141] libmachine: (ha-844661)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/ha-844661.rawdisk'/>
	I1105 18:03:21.228497   27131 main.go:141] libmachine: (ha-844661)       <target dev='hda' bus='virtio'/>
	I1105 18:03:21.228502   27131 main.go:141] libmachine: (ha-844661)     </disk>
	I1105 18:03:21.228511   27131 main.go:141] libmachine: (ha-844661)     <interface type='network'>
	I1105 18:03:21.228519   27131 main.go:141] libmachine: (ha-844661)       <source network='mk-ha-844661'/>
	I1105 18:03:21.228532   27131 main.go:141] libmachine: (ha-844661)       <model type='virtio'/>
	I1105 18:03:21.228539   27131 main.go:141] libmachine: (ha-844661)     </interface>
	I1105 18:03:21.228551   27131 main.go:141] libmachine: (ha-844661)     <interface type='network'>
	I1105 18:03:21.228560   27131 main.go:141] libmachine: (ha-844661)       <source network='default'/>
	I1105 18:03:21.228570   27131 main.go:141] libmachine: (ha-844661)       <model type='virtio'/>
	I1105 18:03:21.228579   27131 main.go:141] libmachine: (ha-844661)     </interface>
	I1105 18:03:21.228587   27131 main.go:141] libmachine: (ha-844661)     <serial type='pty'>
	I1105 18:03:21.228599   27131 main.go:141] libmachine: (ha-844661)       <target port='0'/>
	I1105 18:03:21.228607   27131 main.go:141] libmachine: (ha-844661)     </serial>
	I1105 18:03:21.228613   27131 main.go:141] libmachine: (ha-844661)     <console type='pty'>
	I1105 18:03:21.228629   27131 main.go:141] libmachine: (ha-844661)       <target type='serial' port='0'/>
	I1105 18:03:21.228642   27131 main.go:141] libmachine: (ha-844661)     </console>
	I1105 18:03:21.228653   27131 main.go:141] libmachine: (ha-844661)     <rng model='virtio'>
	I1105 18:03:21.228670   27131 main.go:141] libmachine: (ha-844661)       <backend model='random'>/dev/random</backend>
	I1105 18:03:21.228679   27131 main.go:141] libmachine: (ha-844661)     </rng>
	I1105 18:03:21.228687   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228694   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228699   27131 main.go:141] libmachine: (ha-844661)   </devices>
	I1105 18:03:21.228707   27131 main.go:141] libmachine: (ha-844661) </domain>
	I1105 18:03:21.228717   27131 main.go:141] libmachine: (ha-844661) 
	I1105 18:03:21.232718   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:b2:92:26 in network default
	I1105 18:03:21.233193   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:21.233215   27131 main.go:141] libmachine: (ha-844661) Ensuring networks are active...
	I1105 18:03:21.233765   27131 main.go:141] libmachine: (ha-844661) Ensuring network default is active
	I1105 18:03:21.234017   27131 main.go:141] libmachine: (ha-844661) Ensuring network mk-ha-844661 is active
	I1105 18:03:21.234455   27131 main.go:141] libmachine: (ha-844661) Getting domain xml...
	I1105 18:03:21.235089   27131 main.go:141] libmachine: (ha-844661) Creating domain...
	I1105 18:03:22.412574   27131 main.go:141] libmachine: (ha-844661) Waiting to get IP...
	I1105 18:03:22.413266   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:22.413608   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:22.413630   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:22.413577   27154 retry.go:31] will retry after 279.954438ms: waiting for machine to come up
	I1105 18:03:22.695059   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:22.695483   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:22.695511   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:22.695451   27154 retry.go:31] will retry after 304.898477ms: waiting for machine to come up
	I1105 18:03:23.001972   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.002322   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.002343   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.002303   27154 retry.go:31] will retry after 443.493793ms: waiting for machine to come up
	I1105 18:03:23.446683   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.447042   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.447069   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.446999   27154 retry.go:31] will retry after 509.391538ms: waiting for machine to come up
	I1105 18:03:23.957539   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.957900   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.957927   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.957847   27154 retry.go:31] will retry after 602.880889ms: waiting for machine to come up
	I1105 18:03:24.562659   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:24.563119   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:24.563144   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:24.563076   27154 retry.go:31] will retry after 741.734368ms: waiting for machine to come up
	I1105 18:03:25.306116   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:25.306633   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:25.306663   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:25.306587   27154 retry.go:31] will retry after 1.015957471s: waiting for machine to come up
	I1105 18:03:26.324342   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:26.324731   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:26.324755   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:26.324683   27154 retry.go:31] will retry after 1.378698886s: waiting for machine to come up
	I1105 18:03:27.705172   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:27.705551   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:27.705575   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:27.705506   27154 retry.go:31] will retry after 1.576136067s: waiting for machine to come up
	I1105 18:03:29.283960   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:29.284380   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:29.284417   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:29.284337   27154 retry.go:31] will retry after 2.253581174s: waiting for machine to come up
	I1105 18:03:31.539436   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:31.539830   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:31.539860   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:31.539773   27154 retry.go:31] will retry after 1.761371484s: waiting for machine to come up
	I1105 18:03:33.303719   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:33.304166   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:33.304190   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:33.304128   27154 retry.go:31] will retry after 2.85080226s: waiting for machine to come up
	I1105 18:03:36.156486   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:36.156898   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:36.156920   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:36.156851   27154 retry.go:31] will retry after 4.320693691s: waiting for machine to come up
	I1105 18:03:40.482276   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.482645   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has current primary IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.482666   27131 main.go:141] libmachine: (ha-844661) Found IP for machine: 192.168.39.48
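The retry lines above poll libvirt for the guest's DHCP lease with growing, jittered delays until the domain reports an IP. Below is a minimal sketch of that wait-and-retry pattern; it is not minikube's retry.go, and lookupIP is a made-up stand-in for the lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases; it fails until
// the guest has finished booting and obtained an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.48", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, roughly like the intervals in the log above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}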
	I1105 18:03:40.482731   27131 main.go:141] libmachine: (ha-844661) Reserving static IP address...
	I1105 18:03:40.483186   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find host DHCP lease matching {name: "ha-844661", mac: "52:54:00:ba:57:dd", ip: "192.168.39.48"} in network mk-ha-844661
	I1105 18:03:40.553039   27131 main.go:141] libmachine: (ha-844661) DBG | Getting to WaitForSSH function...
	I1105 18:03:40.553065   27131 main.go:141] libmachine: (ha-844661) Reserved static IP address: 192.168.39.48
	I1105 18:03:40.553074   27131 main.go:141] libmachine: (ha-844661) Waiting for SSH to be available...
	I1105 18:03:40.555541   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.555889   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.555921   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.556076   27131 main.go:141] libmachine: (ha-844661) DBG | Using SSH client type: external
	I1105 18:03:40.556099   27131 main.go:141] libmachine: (ha-844661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa (-rw-------)
	I1105 18:03:40.556130   27131 main.go:141] libmachine: (ha-844661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:03:40.556164   27131 main.go:141] libmachine: (ha-844661) DBG | About to run SSH command:
	I1105 18:03:40.556196   27131 main.go:141] libmachine: (ha-844661) DBG | exit 0
	I1105 18:03:40.678881   27131 main.go:141] libmachine: (ha-844661) DBG | SSH cmd err, output: <nil>: 
	I1105 18:03:40.679168   27131 main.go:141] libmachine: (ha-844661) KVM machine creation complete!
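WaitForSSH above shells out to the external ssh client and treats a successful "exit 0" as proof that the guest is reachable. A rough, self-contained sketch of that probe follows; the flags, host and key path mirror the values printed in the log, but this is only an illustration, not libmachine's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" over ssh and reports whether the command succeeded.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"-p", "22",
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa"
	for !sshReady("192.168.39.48", key) {
		time.Sleep(2 * time.Second) // poll until the guest's sshd answers
	}
	fmt.Println("SSH is available")
}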
	I1105 18:03:40.679431   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:40.680021   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:40.680197   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:40.680362   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:03:40.680377   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:03:40.681549   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:03:40.681565   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:03:40.681581   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:03:40.681589   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.683878   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.684197   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.684222   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.684354   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.684522   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.684666   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.684789   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.684936   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.685164   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.685176   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:03:40.782106   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:03:40.782126   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:03:40.782134   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.785142   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.785540   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.785569   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.785664   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.785868   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.786031   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.786159   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.786354   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.786515   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.786526   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:03:40.883619   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:03:40.883676   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:03:40.883682   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:03:40.883690   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:40.883923   27131 buildroot.go:166] provisioning hostname "ha-844661"
	I1105 18:03:40.883949   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:40.884120   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.886507   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.886833   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.886857   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.886980   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.887151   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.887291   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.887396   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.887549   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.887741   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.887756   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661 && echo "ha-844661" | sudo tee /etc/hostname
	I1105 18:03:41.000392   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661
	
	I1105 18:03:41.000420   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.003294   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.003567   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.003608   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.003744   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.003933   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.004103   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.004242   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.004353   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.004531   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.004545   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:03:41.111348   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:03:41.111383   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:03:41.111432   27131 buildroot.go:174] setting up certificates
	I1105 18:03:41.111449   27131 provision.go:84] configureAuth start
	I1105 18:03:41.111460   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:41.111736   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.114450   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.114812   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.114841   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.114944   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.117124   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.117436   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.117462   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.117573   27131 provision.go:143] copyHostCerts
	I1105 18:03:41.117613   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:03:41.117655   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:03:41.117671   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:03:41.117771   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:03:41.117875   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:03:41.117903   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:03:41.117913   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:03:41.117953   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:03:41.118004   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:03:41.118021   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:03:41.118027   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:03:41.118050   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:03:41.118095   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661 san=[127.0.0.1 192.168.39.48 ha-844661 localhost minikube]
	I1105 18:03:41.208702   27131 provision.go:177] copyRemoteCerts
	I1105 18:03:41.208760   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:03:41.208783   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.211467   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.211827   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.211850   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.212052   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.212204   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.212341   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.212443   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.296812   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:03:41.296897   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:03:41.319712   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:03:41.319772   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:03:41.342415   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:03:41.342483   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1105 18:03:41.365050   27131 provision.go:87] duration metric: took 253.585291ms to configureAuth
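configureAuth above generates a server certificate whose SANs cover the loopback address, the node IP and the machine's hostnames, then copies the PEM files to /etc/docker on the guest. The sketch below is a self-signed approximation using crypto/x509, only meant to show where the SAN list from the log ends up; the real flow signs with the minikube CA key rather than self-signing.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-844661"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list printed in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.48")},
		DNSNames:    []string{"ha-844661", "localhost", "minikube"},
	}
	// Self-signed for brevity; minikube signs with its CA certificate and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}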
	I1105 18:03:41.365082   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:03:41.365296   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:03:41.365378   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.368515   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.368840   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.368869   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.369025   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.369189   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.369363   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.369489   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.369646   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.369808   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.369821   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:03:41.576635   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:03:41.576666   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:03:41.576676   27131 main.go:141] libmachine: (ha-844661) Calling .GetURL
	I1105 18:03:41.577929   27131 main.go:141] libmachine: (ha-844661) DBG | Using libvirt version 6000000
	I1105 18:03:41.580297   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.580615   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.580654   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.580760   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:03:41.580772   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:03:41.580778   27131 client.go:171] duration metric: took 20.840565211s to LocalClient.Create
	I1105 18:03:41.580795   27131 start.go:167] duration metric: took 20.84062429s to libmachine.API.Create "ha-844661"
	I1105 18:03:41.580805   27131 start.go:293] postStartSetup for "ha-844661" (driver="kvm2")
	I1105 18:03:41.580814   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:03:41.580829   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.581046   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:03:41.581068   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.583124   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.583501   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.583522   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.583601   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.583779   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.583943   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.584110   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.661161   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:03:41.665033   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:03:41.665062   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:03:41.665127   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:03:41.665231   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:03:41.665252   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:03:41.665373   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:03:41.674466   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:03:41.696494   27131 start.go:296] duration metric: took 115.67878ms for postStartSetup
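postStartSetup scans .minikube/addons and .minikube/files and mirrors anything it finds onto the guest at the corresponding absolute path (here 154922.pem lands in /etc/ssl/certs). A small sketch of that path mapping, assuming the local files directory from this run exists:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/home/jenkins/minikube-integration/19910-8296/.minikube/files" // path from the log
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// e.g. .../files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem on the guest
		fmt.Println(strings.TrimPrefix(p, root), "<-", p)
		return nil
	})
}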
	I1105 18:03:41.696542   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:41.697138   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.699655   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.699984   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.700009   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.700292   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:03:41.700505   27131 start.go:128] duration metric: took 20.977803727s to createHost
	I1105 18:03:41.700531   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.702386   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.702601   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.702627   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.702711   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.702863   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.703005   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.703106   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.703251   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.703451   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.703464   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:03:41.803411   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829821.777547713
	
	I1105 18:03:41.803432   27131 fix.go:216] guest clock: 1730829821.777547713
	I1105 18:03:41.803441   27131 fix.go:229] Guest: 2024-11-05 18:03:41.777547713 +0000 UTC Remote: 2024-11-05 18:03:41.700519186 +0000 UTC m=+21.085212205 (delta=77.028527ms)
	I1105 18:03:41.803466   27131 fix.go:200] guest clock delta is within tolerance: 77.028527ms
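The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and only force a resync when the drift exceeds a tolerance. A tiny sketch of that comparison; the two-second tolerance is an assumption for illustration, and only the ~77ms delta comes from the log.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock drift is small
// enough to leave the guest clock untouched.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(77 * time.Millisecond)                  // roughly the delta reported above
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true: no resync needed
}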
	I1105 18:03:41.803472   27131 start.go:83] releasing machines lock for "ha-844661", held for 21.080851922s
	I1105 18:03:41.803504   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.803818   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.806212   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.806544   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.806574   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.806731   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807182   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807323   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807421   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:03:41.807458   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.807478   27131 ssh_runner.go:195] Run: cat /version.json
	I1105 18:03:41.807503   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.809937   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810070   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810265   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.810291   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810383   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.810476   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.810506   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810517   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.810650   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.810655   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.810815   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.810809   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.810922   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.811058   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.883551   27131 ssh_runner.go:195] Run: systemctl --version
	I1105 18:03:41.923044   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:03:42.072766   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:03:42.079007   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:03:42.079076   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:03:42.094820   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:03:42.094844   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:03:42.094917   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:03:42.118583   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:03:42.138115   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:03:42.138172   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:03:42.152440   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:03:42.166344   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:03:42.279937   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:03:42.434792   27131 docker.go:233] disabling docker service ...
	I1105 18:03:42.434953   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:03:42.449109   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:03:42.461551   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:03:42.578145   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:03:42.699091   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:03:42.712758   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:03:42.730751   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:03:42.730837   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.741264   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:03:42.741334   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.751371   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.761461   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.771733   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:03:42.782235   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.792151   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.809625   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.820631   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:03:42.829567   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:03:42.829657   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:03:42.841074   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:03:42.849804   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:03:42.970294   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
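The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: they pin the pause image, switch the cgroup manager to cgroupfs, and inject a default_sysctls entry for unprivileged ports. The two simplest edits amount to line-anchored regex replacements; the sketch below applies them to an inlined example config rather than the real file on the guest.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Println(conf)
}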
	I1105 18:03:43.072129   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:03:43.072202   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:03:43.076505   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:03:43.076553   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:03:43.079876   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:03:43.118292   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:03:43.118368   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:03:43.145365   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:03:43.174475   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:03:43.175688   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:43.178118   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:43.178392   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:43.178429   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:43.178616   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:03:43.182299   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
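The bash one-liner above updates /etc/hosts idempotently: it drops any existing host.minikube.internal entry and appends the current gateway address. A minimal sketch of the same upsert logic in Go, operating on an in-memory hosts string rather than the guest's file:

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line that already maps the given name and appends a fresh entry.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost", "192.168.39.1", "host.minikube.internal"))
}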
	I1105 18:03:43.194156   27131 kubeadm.go:883] updating cluster {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:03:43.194286   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:03:43.194326   27131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:03:43.224139   27131 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 18:03:43.224200   27131 ssh_runner.go:195] Run: which lz4
	I1105 18:03:43.227717   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1105 18:03:43.227803   27131 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:03:43.231367   27131 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:03:43.231394   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 18:03:44.421241   27131 crio.go:462] duration metric: took 1.193460189s to copy over tarball
	I1105 18:03:44.421309   27131 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:03:46.448289   27131 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.026951778s)
	I1105 18:03:46.448321   27131 crio.go:469] duration metric: took 2.027054899s to extract the tarball
	I1105 18:03:46.448331   27131 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 18:03:46.484203   27131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:03:46.526703   27131 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:03:46.526728   27131 cache_images.go:84] Images are preloaded, skipping loading
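The preload step above first checks whether the expected images are already in the CRI-O store; since they are not, it copies the ~392 MB lz4 tarball to the guest and unpacks it under /var, after which the images are found. A short sketch of the extract step, shelling out to tar exactly as the logged command does (paths are the ones from this run):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the logged command: decompress with lz4 and unpack into /var,
	// preserving the security.capability extended attributes.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preloaded images extracted")
}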
	I1105 18:03:46.526737   27131 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.2 crio true true} ...
	I1105 18:03:46.526839   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:03:46.526923   27131 ssh_runner.go:195] Run: crio config
	I1105 18:03:46.568508   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:46.568526   27131 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 18:03:46.568535   27131 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:03:46.568555   27131 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844661 NodeName:ha-844661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:03:46.568670   27131 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 18:03:46.568726   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:03:46.568770   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:03:46.584044   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:03:46.584179   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1105 18:03:46.584237   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:03:46.593564   27131 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:03:46.593616   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 18:03:46.602413   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1105 18:03:46.618161   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:03:46.634586   27131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1105 18:03:46.650181   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1105 18:03:46.665377   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:03:46.668925   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:03:46.679986   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:03:46.788039   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:03:46.803466   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.48
	I1105 18:03:46.803487   27131 certs.go:194] generating shared ca certs ...
	I1105 18:03:46.803503   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.803661   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:03:46.803717   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:03:46.803731   27131 certs.go:256] generating profile certs ...
	I1105 18:03:46.803788   27131 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:03:46.803806   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt with IP's: []
	I1105 18:03:46.868048   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt ...
	I1105 18:03:46.868073   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt: {Name:mk1b1384fd11cca80823d77e811ce40ed13a39a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.868260   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key ...
	I1105 18:03:46.868273   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key: {Name:mk63b8cd2995063e8f249e25659d0d581c1c609d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.868372   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a
	I1105 18:03:46.868394   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.254]
	I1105 18:03:47.168393   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a ...
	I1105 18:03:47.168422   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a: {Name:mkfb181b3090bd8c3e2b4c01d3e8bebb9949241a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.168598   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a ...
	I1105 18:03:47.168612   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a: {Name:mk8ee51e070e9f8f3516c15edb86d588cc060b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.168716   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:03:47.168827   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:03:47.168910   27131 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:03:47.168929   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt with IP's: []
	I1105 18:03:47.272330   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt ...
	I1105 18:03:47.272363   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt: {Name:mkef37902a8eaa82f4513587418829011c41aa9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.272551   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key ...
	I1105 18:03:47.272567   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key: {Name:mka47632f74c8924a4575ad6d317d9db035f5aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.272701   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:03:47.272727   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:03:47.272746   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:03:47.272764   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:03:47.272788   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:03:47.272803   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:03:47.272820   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:03:47.272860   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:03:47.272935   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:03:47.272983   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:03:47.272995   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:03:47.273029   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:03:47.273061   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:03:47.273095   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:03:47.273147   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:03:47.273189   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.273209   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.273227   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.273815   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:03:47.298487   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:03:47.321311   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:03:47.343337   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:03:47.365041   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 18:03:47.387466   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:03:47.409231   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:03:47.430651   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:03:47.452212   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:03:47.474137   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:03:47.495806   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:03:47.517223   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:03:47.532167   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:03:47.537576   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:03:47.549952   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.556864   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.556922   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.564072   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:03:47.575807   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:03:47.588714   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.593382   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.593445   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.601274   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:03:47.613497   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:03:47.623268   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.627461   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.627512   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.632828   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:03:47.642821   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:03:47.646365   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:03:47.646411   27131 kubeadm.go:392] StartCluster: {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:03:47.646477   27131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:03:47.646544   27131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:03:47.682117   27131 cri.go:89] found id: ""
	I1105 18:03:47.682186   27131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:03:47.691260   27131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:03:47.700258   27131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:03:47.708885   27131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:03:47.708907   27131 kubeadm.go:157] found existing configuration files:
	
	I1105 18:03:47.708950   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:03:47.717439   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:03:47.717497   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:03:47.726246   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:03:47.734558   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:03:47.734611   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:03:47.743183   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:03:47.751387   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:03:47.751433   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:03:47.760203   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:03:47.768178   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:03:47.768234   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:03:47.776770   27131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:03:47.967353   27131 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 18:03:59.183523   27131 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 18:03:59.183604   27131 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:03:59.183699   27131 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:03:59.183848   27131 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:03:59.183952   27131 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 18:03:59.184008   27131 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:03:59.185602   27131 out.go:235]   - Generating certificates and keys ...
	I1105 18:03:59.185696   27131 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:03:59.185773   27131 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:03:59.185856   27131 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 18:03:59.185912   27131 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 18:03:59.185997   27131 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 18:03:59.186086   27131 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 18:03:59.186173   27131 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 18:03:59.186341   27131 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-844661 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1105 18:03:59.186418   27131 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 18:03:59.186574   27131 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-844661 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1105 18:03:59.186680   27131 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 18:03:59.186753   27131 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 18:03:59.186826   27131 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 18:03:59.186915   27131 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:03:59.187003   27131 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:03:59.187068   27131 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 18:03:59.187122   27131 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:03:59.187247   27131 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:03:59.187350   27131 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:03:59.187464   27131 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:03:59.187595   27131 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:03:59.189162   27131 out.go:235]   - Booting up control plane ...
	I1105 18:03:59.189263   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:03:59.189330   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:03:59.189411   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:03:59.189560   27131 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:03:59.189674   27131 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:03:59.189732   27131 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:03:59.189870   27131 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 18:03:59.190000   27131 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 18:03:59.190063   27131 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.0020676s
	I1105 18:03:59.190152   27131 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 18:03:59.190232   27131 kubeadm.go:310] [api-check] The API server is healthy after 5.797330373s
	I1105 18:03:59.190371   27131 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 18:03:59.190545   27131 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 18:03:59.190621   27131 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 18:03:59.190819   27131 kubeadm.go:310] [mark-control-plane] Marking the node ha-844661 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 18:03:59.190908   27131 kubeadm.go:310] [bootstrap-token] Using token: 87pfeh.t954ki35wy37ojkf
	I1105 18:03:59.192164   27131 out.go:235]   - Configuring RBAC rules ...
	I1105 18:03:59.192251   27131 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 18:03:59.192336   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 18:03:59.192519   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 18:03:59.192749   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 18:03:59.192914   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 18:03:59.193036   27131 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 18:03:59.193159   27131 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 18:03:59.193205   27131 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 18:03:59.193263   27131 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 18:03:59.193287   27131 kubeadm.go:310] 
	I1105 18:03:59.193351   27131 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 18:03:59.193361   27131 kubeadm.go:310] 
	I1105 18:03:59.193483   27131 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 18:03:59.193498   27131 kubeadm.go:310] 
	I1105 18:03:59.193525   27131 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 18:03:59.193576   27131 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 18:03:59.193636   27131 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 18:03:59.193642   27131 kubeadm.go:310] 
	I1105 18:03:59.193690   27131 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 18:03:59.193695   27131 kubeadm.go:310] 
	I1105 18:03:59.193734   27131 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 18:03:59.193739   27131 kubeadm.go:310] 
	I1105 18:03:59.193790   27131 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 18:03:59.193854   27131 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 18:03:59.193915   27131 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 18:03:59.193921   27131 kubeadm.go:310] 
	I1105 18:03:59.193994   27131 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 18:03:59.194085   27131 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 18:03:59.194112   27131 kubeadm.go:310] 
	I1105 18:03:59.194272   27131 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 87pfeh.t954ki35wy37ojkf \
	I1105 18:03:59.194366   27131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 18:03:59.194391   27131 kubeadm.go:310] 	--control-plane 
	I1105 18:03:59.194397   27131 kubeadm.go:310] 
	I1105 18:03:59.194470   27131 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 18:03:59.194483   27131 kubeadm.go:310] 
	I1105 18:03:59.194599   27131 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 87pfeh.t954ki35wy37ojkf \
	I1105 18:03:59.194713   27131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 18:03:59.194723   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:59.194729   27131 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 18:03:59.196416   27131 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 18:03:59.198072   27131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 18:03:59.203679   27131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 18:03:59.203699   27131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 18:03:59.220864   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1105 18:03:59.577751   27131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 18:03:59.577851   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:03:59.577925   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661 minikube.k8s.io/updated_at=2024_11_05T18_03_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=true
	I1105 18:03:59.773949   27131 ops.go:34] apiserver oom_adj: -16
	I1105 18:03:59.774061   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:00.274452   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:00.774925   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:01.274873   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:01.774746   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:02.274653   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:02.410257   27131 kubeadm.go:1113] duration metric: took 2.832479659s to wait for elevateKubeSystemPrivileges
	I1105 18:04:02.410297   27131 kubeadm.go:394] duration metric: took 14.763886485s to StartCluster
	I1105 18:04:02.410318   27131 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:02.410399   27131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:02.411281   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:02.411532   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 18:04:02.411550   27131 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:02.411572   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:04:02.411587   27131 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 18:04:02.411670   27131 addons.go:69] Setting storage-provisioner=true in profile "ha-844661"
	I1105 18:04:02.411690   27131 addons.go:234] Setting addon storage-provisioner=true in "ha-844661"
	I1105 18:04:02.411709   27131 addons.go:69] Setting default-storageclass=true in profile "ha-844661"
	I1105 18:04:02.411717   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:02.411726   27131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-844661"
	I1105 18:04:02.411747   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:02.412164   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.412164   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.412207   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.412212   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.427238   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I1105 18:04:02.427311   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I1105 18:04:02.427732   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.427772   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.428176   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.428198   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.428276   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.428292   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.428565   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.428588   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.428730   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.429124   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.429169   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.430653   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:02.430886   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 18:04:02.431352   27131 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 18:04:02.431554   27131 addons.go:234] Setting addon default-storageclass=true in "ha-844661"
	I1105 18:04:02.431592   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:02.431879   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.431911   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.444788   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1105 18:04:02.445225   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.445776   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.445800   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.446109   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.446308   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.446715   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1105 18:04:02.447172   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.447626   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.447652   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.447978   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.447989   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:02.448526   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.448566   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.450053   27131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:04:02.451430   27131 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:04:02.451447   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 18:04:02.451465   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:02.453936   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.454325   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:02.454352   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.454596   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:02.454747   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:02.454895   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:02.455039   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:02.463344   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1105 18:04:02.463824   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.464272   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.464295   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.464580   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.464736   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.466150   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:02.466325   27131 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 18:04:02.466346   27131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 18:04:02.466366   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:02.468861   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.469292   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:02.469320   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.469478   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:02.469641   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:02.469795   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:02.469919   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:02.559386   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 18:04:02.582601   27131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:04:02.634107   27131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 18:04:03.029603   27131 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1105 18:04:03.212900   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.212938   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.212957   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213012   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213238   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213254   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213263   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.213301   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213309   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213317   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213327   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.213335   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213567   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.213576   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.213601   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213608   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213606   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213626   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213684   27131 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 18:04:03.213697   27131 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 18:04:03.213833   27131 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1105 18:04:03.213847   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:03.213858   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:03.213863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:03.230734   27131 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1105 18:04:03.231584   27131 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1105 18:04:03.231606   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:03.231617   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:03.231624   27131 round_trippers.go:473]     Content-Type: application/json
	I1105 18:04:03.231628   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:03.238223   27131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:04:03.238372   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.238386   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.238717   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.238773   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.238806   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.241254   27131 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1105 18:04:03.242442   27131 addons.go:510] duration metric: took 830.859112ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1105 18:04:03.242476   27131 start.go:246] waiting for cluster config update ...
	I1105 18:04:03.242491   27131 start.go:255] writing updated cluster config ...
	I1105 18:04:03.244187   27131 out.go:201] 
	I1105 18:04:03.246027   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:03.246146   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:03.247790   27131 out.go:177] * Starting "ha-844661-m02" control-plane node in "ha-844661" cluster
	I1105 18:04:03.248926   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:04:03.248959   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:04:03.249079   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:04:03.249097   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:04:03.249198   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:03.249437   27131 start.go:360] acquireMachinesLock for ha-844661-m02: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:04:03.249497   27131 start.go:364] duration metric: took 35.772µs to acquireMachinesLock for "ha-844661-m02"
	I1105 18:04:03.249518   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:03.249605   27131 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1105 18:04:03.251175   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:04:03.251287   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:03.251335   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:03.267010   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I1105 18:04:03.267624   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:03.268242   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:03.268268   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:03.268591   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:03.268765   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:03.268983   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:03.269146   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:04:03.269172   27131 client.go:168] LocalClient.Create starting
	I1105 18:04:03.269203   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:04:03.269237   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:04:03.269249   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:04:03.269297   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:04:03.269315   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:04:03.269325   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:04:03.269338   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:04:03.269353   27131 main.go:141] libmachine: (ha-844661-m02) Calling .PreCreateCheck
	I1105 18:04:03.269514   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:03.269893   27131 main.go:141] libmachine: Creating machine...
	I1105 18:04:03.269906   27131 main.go:141] libmachine: (ha-844661-m02) Calling .Create
	I1105 18:04:03.270065   27131 main.go:141] libmachine: (ha-844661-m02) Creating KVM machine...
	I1105 18:04:03.271308   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found existing default KVM network
	I1105 18:04:03.271402   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found existing private KVM network mk-ha-844661
	I1105 18:04:03.271535   27131 main.go:141] libmachine: (ha-844661-m02) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 ...
	I1105 18:04:03.271561   27131 main.go:141] libmachine: (ha-844661-m02) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:04:03.271623   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.271523   27490 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:04:03.271709   27131 main.go:141] libmachine: (ha-844661-m02) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:04:03.505902   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.505765   27490 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa...
	I1105 18:04:03.597676   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.597557   27490 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/ha-844661-m02.rawdisk...
	I1105 18:04:03.597706   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Writing magic tar header
	I1105 18:04:03.597716   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Writing SSH key tar header
	I1105 18:04:03.597724   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.597692   27490 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 ...
	I1105 18:04:03.597812   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02
	I1105 18:04:03.597845   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:04:03.597903   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:04:03.597916   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 (perms=drwx------)
	I1105 18:04:03.597939   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:04:03.597948   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:04:03.597957   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:04:03.597965   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:04:03.597973   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:04:03.597977   27131 main.go:141] libmachine: (ha-844661-m02) Creating domain...
	I1105 18:04:03.598013   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:04:03.598038   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:04:03.598049   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:04:03.598061   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home
	I1105 18:04:03.598072   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Skipping /home - not owner
	I1105 18:04:03.598898   27131 main.go:141] libmachine: (ha-844661-m02) define libvirt domain using xml: 
	I1105 18:04:03.598916   27131 main.go:141] libmachine: (ha-844661-m02) <domain type='kvm'>
	I1105 18:04:03.598925   27131 main.go:141] libmachine: (ha-844661-m02)   <name>ha-844661-m02</name>
	I1105 18:04:03.598932   27131 main.go:141] libmachine: (ha-844661-m02)   <memory unit='MiB'>2200</memory>
	I1105 18:04:03.598941   27131 main.go:141] libmachine: (ha-844661-m02)   <vcpu>2</vcpu>
	I1105 18:04:03.598947   27131 main.go:141] libmachine: (ha-844661-m02)   <features>
	I1105 18:04:03.598959   27131 main.go:141] libmachine: (ha-844661-m02)     <acpi/>
	I1105 18:04:03.598965   27131 main.go:141] libmachine: (ha-844661-m02)     <apic/>
	I1105 18:04:03.598984   27131 main.go:141] libmachine: (ha-844661-m02)     <pae/>
	I1105 18:04:03.598993   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599024   27131 main.go:141] libmachine: (ha-844661-m02)   </features>
	I1105 18:04:03.599044   27131 main.go:141] libmachine: (ha-844661-m02)   <cpu mode='host-passthrough'>
	I1105 18:04:03.599055   27131 main.go:141] libmachine: (ha-844661-m02)   
	I1105 18:04:03.599061   27131 main.go:141] libmachine: (ha-844661-m02)   </cpu>
	I1105 18:04:03.599069   27131 main.go:141] libmachine: (ha-844661-m02)   <os>
	I1105 18:04:03.599077   27131 main.go:141] libmachine: (ha-844661-m02)     <type>hvm</type>
	I1105 18:04:03.599086   27131 main.go:141] libmachine: (ha-844661-m02)     <boot dev='cdrom'/>
	I1105 18:04:03.599093   27131 main.go:141] libmachine: (ha-844661-m02)     <boot dev='hd'/>
	I1105 18:04:03.599109   27131 main.go:141] libmachine: (ha-844661-m02)     <bootmenu enable='no'/>
	I1105 18:04:03.599120   27131 main.go:141] libmachine: (ha-844661-m02)   </os>
	I1105 18:04:03.599128   27131 main.go:141] libmachine: (ha-844661-m02)   <devices>
	I1105 18:04:03.599142   27131 main.go:141] libmachine: (ha-844661-m02)     <disk type='file' device='cdrom'>
	I1105 18:04:03.599158   27131 main.go:141] libmachine: (ha-844661-m02)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/boot2docker.iso'/>
	I1105 18:04:03.599168   27131 main.go:141] libmachine: (ha-844661-m02)       <target dev='hdc' bus='scsi'/>
	I1105 18:04:03.599177   27131 main.go:141] libmachine: (ha-844661-m02)       <readonly/>
	I1105 18:04:03.599191   27131 main.go:141] libmachine: (ha-844661-m02)     </disk>
	I1105 18:04:03.599203   27131 main.go:141] libmachine: (ha-844661-m02)     <disk type='file' device='disk'>
	I1105 18:04:03.599219   27131 main.go:141] libmachine: (ha-844661-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:04:03.599234   27131 main.go:141] libmachine: (ha-844661-m02)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/ha-844661-m02.rawdisk'/>
	I1105 18:04:03.599245   27131 main.go:141] libmachine: (ha-844661-m02)       <target dev='hda' bus='virtio'/>
	I1105 18:04:03.599254   27131 main.go:141] libmachine: (ha-844661-m02)     </disk>
	I1105 18:04:03.599264   27131 main.go:141] libmachine: (ha-844661-m02)     <interface type='network'>
	I1105 18:04:03.599277   27131 main.go:141] libmachine: (ha-844661-m02)       <source network='mk-ha-844661'/>
	I1105 18:04:03.599295   27131 main.go:141] libmachine: (ha-844661-m02)       <model type='virtio'/>
	I1105 18:04:03.599306   27131 main.go:141] libmachine: (ha-844661-m02)     </interface>
	I1105 18:04:03.599316   27131 main.go:141] libmachine: (ha-844661-m02)     <interface type='network'>
	I1105 18:04:03.599328   27131 main.go:141] libmachine: (ha-844661-m02)       <source network='default'/>
	I1105 18:04:03.599336   27131 main.go:141] libmachine: (ha-844661-m02)       <model type='virtio'/>
	I1105 18:04:03.599346   27131 main.go:141] libmachine: (ha-844661-m02)     </interface>
	I1105 18:04:03.599360   27131 main.go:141] libmachine: (ha-844661-m02)     <serial type='pty'>
	I1105 18:04:03.599371   27131 main.go:141] libmachine: (ha-844661-m02)       <target port='0'/>
	I1105 18:04:03.599379   27131 main.go:141] libmachine: (ha-844661-m02)     </serial>
	I1105 18:04:03.599388   27131 main.go:141] libmachine: (ha-844661-m02)     <console type='pty'>
	I1105 18:04:03.599395   27131 main.go:141] libmachine: (ha-844661-m02)       <target type='serial' port='0'/>
	I1105 18:04:03.599405   27131 main.go:141] libmachine: (ha-844661-m02)     </console>
	I1105 18:04:03.599414   27131 main.go:141] libmachine: (ha-844661-m02)     <rng model='virtio'>
	I1105 18:04:03.599426   27131 main.go:141] libmachine: (ha-844661-m02)       <backend model='random'>/dev/random</backend>
	I1105 18:04:03.599433   27131 main.go:141] libmachine: (ha-844661-m02)     </rng>
	I1105 18:04:03.599441   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599450   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599458   27131 main.go:141] libmachine: (ha-844661-m02)   </devices>
	I1105 18:04:03.599468   27131 main.go:141] libmachine: (ha-844661-m02) </domain>
	I1105 18:04:03.599478   27131 main.go:141] libmachine: (ha-844661-m02) 
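The XML dump above is the libvirt domain definition the kvm2 driver generates for the secondary control-plane node (boot2docker ISO as cdrom, raw disk, two virtio NICs on mk-ha-844661 and default, serial console, virtio RNG). As a hedged illustration only — not the driver's actual code — defining and booting such a domain with the libvirt Go bindings could look roughly like this; the connection URI and the xmlConfig argument are assumptions:

    package sketch

    import (
    	libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart defines a persistent libvirt domain from generated XML and
    // boots it, mirroring the "define libvirt domain using xml" / "Creating
    // domain..." steps in the log above.
    func defineAndStart(xmlConfig string) error {
    	conn, err := libvirt.NewConnect("qemu:///system") // assumed connection URI
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(xmlConfig)
    	if err != nil {
    		return err
    	}
    	defer dom.Free()

    	return dom.Create()
    }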
	I1105 18:04:03.606202   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:bc:44:b3 in network default
	I1105 18:04:03.606844   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring networks are active...
	I1105 18:04:03.606873   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:03.607579   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring network default is active
	I1105 18:04:03.607877   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring network mk-ha-844661 is active
	I1105 18:04:03.608339   27131 main.go:141] libmachine: (ha-844661-m02) Getting domain xml...
	I1105 18:04:03.609124   27131 main.go:141] libmachine: (ha-844661-m02) Creating domain...
	I1105 18:04:04.804854   27131 main.go:141] libmachine: (ha-844661-m02) Waiting to get IP...
	I1105 18:04:04.805676   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:04.806067   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:04.806128   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:04.806059   27490 retry.go:31] will retry after 221.645511ms: waiting for machine to come up
	I1105 18:04:05.029505   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.029976   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.030010   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.029926   27490 retry.go:31] will retry after 382.599739ms: waiting for machine to come up
	I1105 18:04:05.414471   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.414907   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.414933   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.414864   27490 retry.go:31] will retry after 327.048237ms: waiting for machine to come up
	I1105 18:04:05.743302   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.743771   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.743804   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.743710   27490 retry.go:31] will retry after 518.430277ms: waiting for machine to come up
	I1105 18:04:06.263310   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:06.263829   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:06.263853   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:06.263789   27490 retry.go:31] will retry after 629.481848ms: waiting for machine to come up
	I1105 18:04:06.894494   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:06.895089   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:06.895118   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:06.895038   27490 retry.go:31] will retry after 880.755684ms: waiting for machine to come up
	I1105 18:04:07.777105   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:07.777585   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:07.777629   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:07.777517   27490 retry.go:31] will retry after 728.781586ms: waiting for machine to come up
	I1105 18:04:08.507833   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:08.508322   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:08.508350   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:08.508268   27490 retry.go:31] will retry after 1.405343367s: waiting for machine to come up
	I1105 18:04:09.915737   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:09.916175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:09.916206   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:09.916130   27490 retry.go:31] will retry after 1.614277424s: waiting for machine to come up
	I1105 18:04:11.532132   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:11.532606   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:11.532651   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:11.532528   27490 retry.go:31] will retry after 2.182290087s: waiting for machine to come up
	I1105 18:04:13.716671   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:13.717064   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:13.717090   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:13.717036   27490 retry.go:31] will retry after 2.181711488s: waiting for machine to come up
	I1105 18:04:15.901246   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:15.901742   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:15.901769   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:15.901678   27490 retry.go:31] will retry after 3.553887492s: waiting for machine to come up
	I1105 18:04:19.457631   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:19.458252   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:19.458280   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:19.458200   27490 retry.go:31] will retry after 2.842714356s: waiting for machine to come up
	I1105 18:04:22.304175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:22.304555   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:22.304577   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:22.304516   27490 retry.go:31] will retry after 4.429177675s: waiting for machine to come up
	I1105 18:04:26.738445   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.738953   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has current primary IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.739021   27131 main.go:141] libmachine: (ha-844661-m02) Found IP for machine: 192.168.39.38
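The repeated "retry.go:31] will retry after ..." lines above are the driver polling the network's DHCP leases until the new domain acquires an address, sleeping a growing, jittered delay between attempts. A minimal sketch of that pattern, assuming a hypothetical lookupIP callback in place of minikube's lease lookup:

    package sketch

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookupIP until it returns an address, sleeping a growing,
    // jittered delay between attempts, like the "will retry after ..." lines above.
    func waitForIP(lookupIP func() (string, error), maxAttempts int) (string, error) {
    	delay := 200 * time.Millisecond
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if ip, err := lookupIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		delay = delay * 3 / 2 // grow the base delay between attempts
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }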
	I1105 18:04:26.739034   27131 main.go:141] libmachine: (ha-844661-m02) Reserving static IP address...
	I1105 18:04:26.739350   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find host DHCP lease matching {name: "ha-844661-m02", mac: "52:54:00:46:71:ad", ip: "192.168.39.38"} in network mk-ha-844661
	I1105 18:04:26.812299   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Getting to WaitForSSH function...
	I1105 18:04:26.812324   27131 main.go:141] libmachine: (ha-844661-m02) Reserved static IP address: 192.168.39.38
	I1105 18:04:26.812336   27131 main.go:141] libmachine: (ha-844661-m02) Waiting for SSH to be available...
	I1105 18:04:26.815175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.815513   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661
	I1105 18:04:26.815540   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find defined IP address of network mk-ha-844661 interface with MAC address 52:54:00:46:71:ad
	I1105 18:04:26.815668   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH client type: external
	I1105 18:04:26.815699   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa (-rw-------)
	I1105 18:04:26.815752   27131 main.go:141] libmachine: (ha-844661-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:04:26.815781   27131 main.go:141] libmachine: (ha-844661-m02) DBG | About to run SSH command:
	I1105 18:04:26.815798   27131 main.go:141] libmachine: (ha-844661-m02) DBG | exit 0
	I1105 18:04:26.819693   27131 main.go:141] libmachine: (ha-844661-m02) DBG | SSH cmd err, output: exit status 255: 
	I1105 18:04:26.819710   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1105 18:04:26.819733   27131 main.go:141] libmachine: (ha-844661-m02) DBG | command : exit 0
	I1105 18:04:26.819747   27131 main.go:141] libmachine: (ha-844661-m02) DBG | err     : exit status 255
	I1105 18:04:26.819758   27131 main.go:141] libmachine: (ha-844661-m02) DBG | output  : 
	I1105 18:04:29.821203   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Getting to WaitForSSH function...
	I1105 18:04:29.823337   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.823729   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:29.823762   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.823872   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH client type: external
	I1105 18:04:29.823894   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa (-rw-------)
	I1105 18:04:29.823922   27131 main.go:141] libmachine: (ha-844661-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:04:29.823940   27131 main.go:141] libmachine: (ha-844661-m02) DBG | About to run SSH command:
	I1105 18:04:29.823952   27131 main.go:141] libmachine: (ha-844661-m02) DBG | exit 0
	I1105 18:04:29.951085   27131 main.go:141] libmachine: (ha-844661-m02) DBG | SSH cmd err, output: <nil>: 
	I1105 18:04:29.951342   27131 main.go:141] libmachine: (ha-844661-m02) KVM machine creation complete!
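The first "exit 0" probe above fails with exit status 255 while the guest's SSH daemon is still coming up, and the retry roughly three seconds later succeeds, after which machine creation is declared complete. A hedged sketch of such a probe using golang.org/x/crypto/ssh — the host, user and key path match the log, but the retry policy and function itself are illustrative, not minikube's implementation:

    package sketch

    import (
    	"errors"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // probeSSH runs "exit 0" on the guest until it succeeds, similar to the
    // WaitForSSH step above.
    func probeSSH(addr, user, keyPath string, attempts int) error {
    	pem, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(pem)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // log also disables StrictHostKeyChecking
    		Timeout:         10 * time.Second,
    	}
    	for i := 0; i < attempts; i++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			session, serr := client.NewSession()
    			if serr == nil {
    				serr = session.Run("exit 0") // the same probe the log issues
    				session.Close()
    			}
    			client.Close()
    			if serr == nil {
    				return nil
    			}
    		}
    		time.Sleep(3 * time.Second) // the log waits ~3s after "exit status 255"
    	}
    	return errors.New("ssh did not become available")
    }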
	I1105 18:04:29.951700   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:29.952363   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:29.952587   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:29.952760   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:04:29.952794   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetState
	I1105 18:04:29.954134   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:04:29.954148   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:04:29.954153   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:04:29.954158   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:29.956382   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.956701   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:29.956727   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.956885   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:29.957041   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:29.957158   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:29.957245   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:29.957384   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:29.957587   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:29.957598   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:04:30.062109   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:04:30.062134   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:04:30.062144   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.064857   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.065391   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.065423   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.065611   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.065805   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.065970   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.066128   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.066292   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.066496   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.066512   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:04:30.175484   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:04:30.175559   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:04:30.175573   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:04:30.175583   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.175860   27131 buildroot.go:166] provisioning hostname "ha-844661-m02"
	I1105 18:04:30.175892   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.176101   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.178534   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.178884   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.178952   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.179036   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.179212   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.179364   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.179519   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.179693   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.179914   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.179935   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661-m02 && echo "ha-844661-m02" | sudo tee /etc/hostname
	I1105 18:04:30.302286   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661-m02
	
	I1105 18:04:30.302313   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.305041   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.305376   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.305397   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.305565   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.305735   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.305864   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.306027   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.306153   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.306345   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.306368   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:04:30.418880   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:04:30.418913   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:04:30.418933   27131 buildroot.go:174] setting up certificates
	I1105 18:04:30.418944   27131 provision.go:84] configureAuth start
	I1105 18:04:30.418958   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.419230   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:30.421818   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.422198   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.422218   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.422357   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.424553   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.424893   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.424934   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.425058   27131 provision.go:143] copyHostCerts
	I1105 18:04:30.425085   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:04:30.425123   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:04:30.425135   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:04:30.425209   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:04:30.425294   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:04:30.425312   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:04:30.425316   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:04:30.425339   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:04:30.425392   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:04:30.425411   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:04:30.425417   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:04:30.425437   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:04:30.425500   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661-m02 san=[127.0.0.1 192.168.39.38 ha-844661-m02 localhost minikube]
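The line above issues the machine's server certificate off the shared CA with both IP and DNS SANs (127.0.0.1, 192.168.39.38, ha-844661-m02, localhost, minikube). A minimal crypto/x509 sketch of issuing such a certificate — the caCert/caKey parameters, key size, serial handling and validity period are simplified assumptions, not minikube's exact settings:

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a server certificate with IP and DNS SANs, as in the
    // "generating server cert ... san=[...]" step of the log.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()), // simplified serial
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-844661-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.38")},
    		DNSNames:     []string{"ha-844661-m02", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }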
	I1105 18:04:30.669687   27131 provision.go:177] copyRemoteCerts
	I1105 18:04:30.669745   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:04:30.669767   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.672398   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.672764   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.672792   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.672964   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.673166   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.673319   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.673440   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:30.757634   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:04:30.757707   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:04:30.779929   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:04:30.779991   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:04:30.802282   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:04:30.802340   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:04:30.824080   27131 provision.go:87] duration metric: took 405.122043ms to configureAuth
	I1105 18:04:30.824105   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:04:30.824267   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:30.824337   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.826767   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.827187   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.827210   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.827374   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.827574   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.827761   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.827911   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.828074   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.828241   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.828257   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:04:31.054134   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:04:31.054167   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:04:31.054177   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetURL
	I1105 18:04:31.055397   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using libvirt version 6000000
	I1105 18:04:31.057579   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.057909   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.057942   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.058035   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:04:31.058055   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:04:31.058063   27131 client.go:171] duration metric: took 27.788882282s to LocalClient.Create
	I1105 18:04:31.058089   27131 start.go:167] duration metric: took 27.788944247s to libmachine.API.Create "ha-844661"
	I1105 18:04:31.058102   27131 start.go:293] postStartSetup for "ha-844661-m02" (driver="kvm2")
	I1105 18:04:31.058116   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:04:31.058140   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.058392   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:04:31.058416   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.060812   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.061181   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.061207   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.061372   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.061520   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.061638   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.061750   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.141343   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:04:31.145282   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:04:31.145305   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:04:31.145386   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:04:31.145475   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:04:31.145487   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:04:31.145583   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:04:31.154867   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:04:31.177214   27131 start.go:296] duration metric: took 119.098287ms for postStartSetup
	I1105 18:04:31.177266   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:31.177795   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:31.180218   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.180581   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.180609   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.180893   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:31.181127   27131 start.go:128] duration metric: took 27.931509235s to createHost
	I1105 18:04:31.181151   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.183589   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.183931   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.183977   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.184093   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.184255   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.184473   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.184627   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.184776   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:31.184927   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:31.184936   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:04:31.291832   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829871.274251077
	
	I1105 18:04:31.291862   27131 fix.go:216] guest clock: 1730829871.274251077
	I1105 18:04:31.291873   27131 fix.go:229] Guest: 2024-11-05 18:04:31.274251077 +0000 UTC Remote: 2024-11-05 18:04:31.181141215 +0000 UTC m=+70.565834196 (delta=93.109862ms)
	I1105 18:04:31.291893   27131 fix.go:200] guest clock delta is within tolerance: 93.109862ms
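The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew (93ms here) stays within tolerance. A small sketch of that check; the parsing and the tolerance argument are assumptions rather than minikube's constants:

    package sketch

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
    // the skew against the local clock is within tolerance.
    func clockDeltaOK(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
    	if err != nil {
    		return 0, false, fmt.Errorf("parsing guest clock: %w", err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	ok := math.Abs(float64(delta)) <= float64(tolerance)
    	return delta, ok, nil
    }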
	I1105 18:04:31.291902   27131 start.go:83] releasing machines lock for "ha-844661-m02", held for 28.042391542s
	I1105 18:04:31.291933   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.292188   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:31.294847   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.295152   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.295182   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.297372   27131 out.go:177] * Found network options:
	I1105 18:04:31.298882   27131 out.go:177]   - NO_PROXY=192.168.39.48
	W1105 18:04:31.300182   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:04:31.300214   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.300744   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.300953   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.301049   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:04:31.301078   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	W1105 18:04:31.301139   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:04:31.301229   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:04:31.301249   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.303834   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304115   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304147   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.304164   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304340   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.304518   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.304656   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.304683   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304705   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.304817   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.304875   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.304966   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.305123   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.305293   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.537813   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:04:31.543318   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:04:31.543380   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:04:31.558192   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:04:31.558214   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:04:31.558265   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:04:31.574444   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:04:31.588020   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:04:31.588073   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:04:31.601225   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:04:31.614872   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:04:31.742673   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:04:31.906474   27131 docker.go:233] disabling docker service ...
	I1105 18:04:31.906547   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:04:31.920407   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:04:31.932829   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:04:32.065646   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:04:32.198693   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:04:32.211636   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:04:32.228537   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:04:32.228604   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.238359   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:04:32.238426   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.248245   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.258019   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.267772   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:04:32.277903   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.287745   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.304428   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.315166   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:04:32.324687   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:04:32.324739   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:04:32.338701   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:04:32.349299   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:32.473469   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
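The series of `sed -i` runs above rewrites the CRI-O drop-in (/etc/crio/crio.conf.d/02-crio.conf) so the pause image and cgroup manager match what minikube expects before `systemctl restart crio`. A hedged in-memory sketch of just the first two substitutions, with file handling and the remaining edits omitted:

    package sketch

    import "regexp"

    // patchCrioConf applies the pause_image and cgroup_manager substitutions that
    // the log performs with sed on the CRI-O drop-in config.
    func patchCrioConf(conf string) string {
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	return conf
    }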
	I1105 18:04:32.562263   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:04:32.562341   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:04:32.567966   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:04:32.568012   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:04:32.571415   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:04:32.608501   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:04:32.608591   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:04:32.636314   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:04:32.664649   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:04:32.666073   27131 out.go:177]   - env NO_PROXY=192.168.39.48
	I1105 18:04:32.667578   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:32.670054   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:32.670404   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:32.670434   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:32.670640   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:04:32.675107   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:04:32.687100   27131 mustload.go:65] Loading cluster: ha-844661
	I1105 18:04:32.687313   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:32.687563   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:32.687614   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:32.702173   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I1105 18:04:32.702544   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:32.703040   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:32.703059   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:32.703356   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:32.703527   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:32.705121   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:32.705395   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:32.705427   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:32.719590   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I1105 18:04:32.719963   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:32.720450   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:32.720471   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:32.720753   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:32.720928   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:32.721076   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.38
	I1105 18:04:32.721087   27131 certs.go:194] generating shared ca certs ...
	I1105 18:04:32.721099   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.721216   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:04:32.721253   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:04:32.721262   27131 certs.go:256] generating profile certs ...
	I1105 18:04:32.721325   27131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:04:32.721348   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8
	I1105 18:04:32.721359   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.254]
	I1105 18:04:32.817294   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 ...
	I1105 18:04:32.817319   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8: {Name:mk45feacdbeaf35fb15921aeeafdbedf19f7f2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.817474   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8 ...
	I1105 18:04:32.817487   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8: {Name:mkf0dcf762cb289770c94346689eba9d112e92a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.817551   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:04:32.817676   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:04:32.817799   27131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:04:32.817813   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:04:32.817827   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:04:32.817838   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:04:32.817853   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:04:32.817867   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:04:32.817879   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:04:32.817890   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:04:32.817899   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:04:32.817954   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:04:32.817983   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:04:32.817992   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:04:32.818014   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:04:32.818034   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:04:32.818055   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:04:32.818093   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:04:32.818118   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:04:32.818132   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:04:32.818145   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:32.818175   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:32.821627   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:32.822087   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:32.822115   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:32.822324   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:32.822514   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:32.822635   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:32.822754   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:32.895384   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:04:32.901151   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:04:32.911563   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:04:32.916135   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1105 18:04:32.926023   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:04:32.929795   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:04:32.939479   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:04:32.943460   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:04:32.953743   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:04:32.957464   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:04:32.967126   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:04:32.971370   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 18:04:32.981265   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:04:33.005948   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:04:33.028537   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:04:33.051691   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:04:33.077296   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 18:04:33.099924   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:04:33.122118   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:04:33.144496   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:04:33.167061   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:04:33.189719   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:04:33.212311   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:04:33.234431   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:04:33.249569   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1105 18:04:33.264947   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:04:33.280382   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:04:33.295047   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:04:33.310658   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 18:04:33.325227   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:04:33.340438   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:04:33.345637   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:04:33.355163   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.359277   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.359332   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.364640   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:04:33.374197   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:04:33.383883   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.388205   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.388269   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.393534   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:04:33.403611   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:04:33.413496   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.417522   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.417572   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.422911   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
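Note: the three blocks above install each CA certificate into the node's trust store by computing its OpenSSL subject hash and symlinking /etc/ssl/certs/<hash>.0 to the certificate. A minimal Go sketch of that hash-and-symlink step, assuming an openssl binary on PATH; the target path mirrors the log, but minikube performs this over its SSH runner rather than locally as shown here.

	// Hedged sketch: compute the OpenSSL subject hash of a CA certificate and link it
	// into /etc/ssl/certs/<hash>.0, matching the "openssl x509 -hash" + "ln -fs" lines above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func installCACert(certPath string) error {
		// Same invocation as in the log: openssl x509 -hash -noout -in <cert>.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// /etc/ssl/certs/<hash>.0 -> certificate, mirroring the "ln -fs" in the log.
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // -f semantics: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}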
	I1105 18:04:33.432783   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:04:33.436475   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:04:33.436531   27131 kubeadm.go:934] updating node {m02 192.168.39.38 8443 v1.31.2 crio true true} ...
	I1105 18:04:33.436634   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:04:33.436658   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:04:33.436695   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:04:33.453065   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:04:33.453148   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 18:04:33.453221   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:04:33.462691   27131 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 18:04:33.462762   27131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 18:04:33.472553   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 18:04:33.472563   27131 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1105 18:04:33.472583   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:04:33.472584   27131 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1105 18:04:33.472655   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:04:33.477105   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 18:04:33.477133   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 18:04:34.400283   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:04:34.400361   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:04:34.405010   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 18:04:34.405045   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 18:04:34.538786   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:04:34.578282   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:04:34.578382   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:04:34.588498   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 18:04:34.588540   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
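Note: the binaries.go section above probes each Kubernetes binary on the node with stat and pushes the cached copy only when the probe fails. Below is a minimal sketch of that check-then-transfer pattern, shelling out to ssh/scp purely for illustration; the directories, version and key path are copied from the log, but the command-line invocations are assumptions, not minikube's actual ssh_runner implementation.

	// Hedged sketch: "existence check, then transfer" for cached k8s binaries.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const (
		host   = "docker@192.168.39.48" // illustrative; taken from the ssh client line above
		sshKey = "/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa"
	)

	func ensureRemoteBinary(local, remote string) error {
		// stat exits non-zero when the file is absent (status 1 in the log), so a probe
		// failure means the binary still has to be copied.
		probe := exec.Command("ssh", "-i", sshKey, host, "stat -c '%s %y' "+remote)
		if err := probe.Run(); err == nil {
			return nil // already present, skip the transfer
		}
		// Missing on the node: push the cached binary, as the scp lines above show.
		return exec.Command("scp", "-i", sshKey, local, host+":"+remote).Run()
	}

	func main() {
		for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
			local := "/home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/" + bin
			remote := "/var/lib/minikube/binaries/v1.31.2/" + bin
			if err := ensureRemoteBinary(local, remote); err != nil {
				fmt.Println("transfer failed:", bin, err)
			}
		}
	}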
	I1105 18:04:34.951438   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:04:34.960448   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1105 18:04:34.976680   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:04:34.992424   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:04:35.007877   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:04:35.011593   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:04:35.023033   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:35.153794   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:04:35.171325   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:35.171790   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:35.171844   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:35.187008   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I1105 18:04:35.187511   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:35.188000   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:35.188021   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:35.188401   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:35.188593   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:35.188755   27131 start.go:317] joinCluster: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:04:35.188861   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 18:04:35.188876   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:35.192373   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:35.193007   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:35.193036   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:35.193153   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:35.193322   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:35.193493   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:35.193633   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:35.352325   27131 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:35.352369   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token io85g1.ce9beps1a5sdfopc --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m02 --control-plane --apiserver-advertise-address=192.168.39.38 --apiserver-bind-port=8443"
	I1105 18:04:56.900009   27131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token io85g1.ce9beps1a5sdfopc --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m02 --control-plane --apiserver-advertise-address=192.168.39.38 --apiserver-bind-port=8443": (21.547609543s)
	I1105 18:04:56.900049   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 18:04:57.434153   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661-m02 minikube.k8s.io/updated_at=2024_11_05T18_04_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=false
	I1105 18:04:57.562849   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844661-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 18:04:57.694503   27131 start.go:319] duration metric: took 22.505743601s to joinCluster
	I1105 18:04:57.694592   27131 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:57.694912   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:57.695940   27131 out.go:177] * Verifying Kubernetes components...
	I1105 18:04:57.697102   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:57.983429   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:04:58.029548   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:58.029888   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:04:58.029994   27131 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.48:8443
	I1105 18:04:58.030271   27131 node_ready.go:35] waiting up to 6m0s for node "ha-844661-m02" to be "Ready" ...
	I1105 18:04:58.030407   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:58.030418   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:58.030429   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:58.030436   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:58.043836   27131 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 18:04:58.531097   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:58.531124   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:58.531135   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:58.531142   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:58.543712   27131 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1105 18:04:59.030878   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:59.030899   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:59.030908   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:59.030912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:59.035656   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:04:59.530596   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:59.530621   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:59.530633   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:59.530639   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:59.534120   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:00.030984   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:00.031006   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:00.031014   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:00.031017   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:00.034282   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:00.035034   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:00.530821   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:00.530846   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:00.530858   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:00.530864   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:00.536618   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:05:01.031310   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:01.031331   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:01.031340   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:01.031345   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:01.034641   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:01.530557   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:01.530578   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:01.530588   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:01.530595   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:01.539049   27131 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1105 18:05:02.031172   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:02.031197   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:02.031206   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:02.031210   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:02.034664   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:02.035295   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:02.531134   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:02.531158   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:02.531168   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:02.531173   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:02.534691   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:03.030649   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:03.030676   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:03.030684   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:03.030689   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:03.034294   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:03.531341   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:03.531362   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:03.531370   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:03.531374   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:03.534345   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:04.031389   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:04.031412   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:04.031420   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:04.031425   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:04.034432   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:04.531089   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:04.531121   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:04.531130   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:04.531134   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:04.534592   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:04.535270   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:05.030583   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:05.030606   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:05.030614   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:05.030618   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:05.034321   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:05.530714   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:05.530735   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:05.530744   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:05.530748   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:05.534305   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:06.031071   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:06.031093   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:06.031101   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:06.031105   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:06.034416   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:06.531473   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:06.531497   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:06.531506   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:06.531513   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:06.534473   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:07.030494   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:07.030518   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:07.030526   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:07.030530   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:07.033934   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:07.034429   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:07.530834   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:07.530861   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:07.530871   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:07.530876   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:07.534136   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:08.031065   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:08.031086   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:08.031094   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:08.031097   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:08.034490   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:08.530752   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:08.530774   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:08.530782   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:08.530787   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:08.534189   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:09.030956   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:09.030998   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:09.031007   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:09.031013   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:09.034514   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:09.035140   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:09.531531   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:09.531558   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:09.531569   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:09.531577   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:09.534682   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:10.030566   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:10.030603   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:10.030611   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:10.030615   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:10.034288   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:10.530760   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:10.530786   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:10.530797   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:10.530803   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:10.535094   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:11.031135   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:11.031156   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:11.031164   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:11.031167   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:11.034996   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:11.035590   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:11.530958   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:11.531025   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:11.531033   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:11.531036   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:11.534280   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:12.031192   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:12.031217   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:12.031226   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:12.031229   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:12.034799   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:12.530835   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:12.530859   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:12.530866   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:12.530871   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:12.535212   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:13.031138   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:13.031161   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:13.031168   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:13.031174   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:13.035138   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:13.035640   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:13.531336   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:13.531361   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:13.531372   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:13.531377   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:13.534343   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:14.031248   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:14.031269   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:14.031277   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:14.031280   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:14.034318   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:14.531121   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:14.531144   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:14.531152   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:14.531156   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:14.534522   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.031444   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:15.031471   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:15.031481   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:15.031485   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:15.035107   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.531231   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:15.531259   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:15.531295   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:15.531301   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:15.534694   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.535240   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:16.031143   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:16.031166   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:16.031174   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:16.031178   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:16.034542   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:16.530558   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:16.530585   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:16.530592   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:16.530596   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:16.534438   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.031334   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.031354   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.031363   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.031377   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.034859   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.530585   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.530609   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.530617   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.530621   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.534242   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.534822   27131 node_ready.go:49] node "ha-844661-m02" has status "Ready":"True"
	I1105 18:05:17.534842   27131 node_ready.go:38] duration metric: took 19.504524126s for node "ha-844661-m02" to be "Ready" ...
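Note: the node_ready loop above issues a GET against the joined node roughly every 500ms until its Ready condition turns True, bounded by the 6m0s budget. A minimal client-go sketch of the same wait; the kubeconfig path, node name and timings mirror the log, while the helper layout is illustrative and not minikube's round_trippers-based code.

	// Hedged sketch: poll a node until its NodeReady condition is True or the context times out.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // matches the ~0.5s poll cadence in the log
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // "Ready":"True", as the log finally reports
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19910-8296/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the 6m0s budget from the log
		defer cancel()
		if err := waitNodeReady(ctx, cs, "ha-844661-m02"); err != nil {
			panic(err)
		}
	}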
	I1105 18:05:17.534853   27131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:05:17.534933   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:17.534945   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.534955   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.534962   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.539957   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:17.545365   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.545456   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4bdfz
	I1105 18:05:17.545468   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.545479   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.545485   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.548667   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.549324   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.549340   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.549350   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.549355   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.552460   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.553059   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.553079   27131 pod_ready.go:82] duration metric: took 7.687809ms for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.553089   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.553143   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5g97
	I1105 18:05:17.553151   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.553157   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.553161   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.556133   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.556688   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.556701   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.556708   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.556711   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.559655   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.560102   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.560125   27131 pod_ready.go:82] duration metric: took 7.028626ms for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.560138   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.560192   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661
	I1105 18:05:17.560200   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.560207   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.560211   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.563041   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.563593   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.563605   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.563612   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.563617   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.566382   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.566799   27131 pod_ready.go:93] pod "etcd-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.566816   27131 pod_ready.go:82] duration metric: took 6.672004ms for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.566824   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.566881   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m02
	I1105 18:05:17.566890   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.566897   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.566901   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.570076   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.570614   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.570630   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.570639   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.570644   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.574134   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.574566   27131 pod_ready.go:93] pod "etcd-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.574584   27131 pod_ready.go:82] duration metric: took 7.753168ms for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.574604   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.730613   27131 request.go:632] Waited for 155.951288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:05:17.730716   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:05:17.730738   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.730750   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.730756   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.734460   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.931599   27131 request.go:632] Waited for 196.455308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.931691   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.931703   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.931714   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.931720   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.935472   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.936248   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.936270   27131 pod_ready.go:82] duration metric: took 361.658171ms for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.936283   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.131401   27131 request.go:632] Waited for 195.044956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:05:18.131499   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:05:18.131506   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.131514   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.131520   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.135482   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.331525   27131 request.go:632] Waited for 195.194468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:18.331593   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:18.331598   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.331605   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.331610   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.334692   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.335419   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:18.335438   27131 pod_ready.go:82] duration metric: took 399.143957ms for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.335449   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.530629   27131 request.go:632] Waited for 195.065538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:05:18.530715   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:05:18.530724   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.530734   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.530747   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.534793   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:18.731049   27131 request.go:632] Waited for 195.44458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:18.731128   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:18.731134   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.731143   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.731148   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.734646   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.735269   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:18.735297   27131 pod_ready.go:82] duration metric: took 399.840715ms for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.735311   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.931233   27131 request.go:632] Waited for 195.850053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:05:18.931303   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:05:18.931310   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.931320   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.931326   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.935301   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.131408   27131 request.go:632] Waited for 195.30965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.131471   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.131476   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.131483   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.131487   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.134983   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.135599   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.135639   27131 pod_ready.go:82] duration metric: took 400.298272ms for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.135650   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.330670   27131 request.go:632] Waited for 194.9293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:05:19.330729   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:05:19.330734   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.330741   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.330745   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.334278   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.531215   27131 request.go:632] Waited for 196.368669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:19.531275   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:19.531280   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.531287   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.531290   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.535032   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.535778   27131 pod_ready.go:93] pod "kube-proxy-pjpkh" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.535799   27131 pod_ready.go:82] duration metric: took 400.142488ms for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.535811   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.730859   27131 request.go:632] Waited for 194.981031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:05:19.730957   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:05:19.730981   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.730993   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.731003   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.734505   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.931630   27131 request.go:632] Waited for 196.356772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.931695   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.931703   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.931713   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.931721   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.934664   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:19.935138   27131 pod_ready.go:93] pod "kube-proxy-zsbfs" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.935158   27131 pod_ready.go:82] duration metric: took 399.338721ms for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.935171   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.131253   27131 request.go:632] Waited for 196.012842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:05:20.131339   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:05:20.131346   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.131354   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.131365   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.135136   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.331213   27131 request.go:632] Waited for 195.465792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:20.331270   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:20.331276   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.331283   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.331287   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.334310   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.334872   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:20.334894   27131 pod_ready.go:82] duration metric: took 399.711008ms for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.334908   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.531014   27131 request.go:632] Waited for 195.998146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:05:20.531072   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:05:20.531077   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.531084   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.531092   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.534503   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.731389   27131 request.go:632] Waited for 196.312857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:20.731476   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:20.731488   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.731496   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.731502   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.734866   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.735369   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:20.735387   27131 pod_ready.go:82] duration metric: took 400.467875ms for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.735398   27131 pod_ready.go:39] duration metric: took 3.200533347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
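
The pod_ready waits logged above poll each system pod through the API server until its Ready condition reports True. A minimal client-go sketch of that style of check; the kubeconfig path and pod name here are illustrative, not taken from this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll one pod until Ready, roughly what the pod_ready wait does per pod.
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-844661", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
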
	I1105 18:05:20.735415   27131 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:05:20.735464   27131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:05:20.751422   27131 api_server.go:72] duration metric: took 23.056783291s to wait for apiserver process to appear ...
	I1105 18:05:20.751455   27131 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:05:20.751507   27131 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1105 18:05:20.755872   27131 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1105 18:05:20.755957   27131 round_trippers.go:463] GET https://192.168.39.48:8443/version
	I1105 18:05:20.755969   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.755980   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.755990   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.756842   27131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 18:05:20.756943   27131 api_server.go:141] control plane version: v1.31.2
	I1105 18:05:20.756968   27131 api_server.go:131] duration metric: took 5.494459ms to wait for apiserver health ...
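
The apiserver healthz wait above is an HTTPS GET against the /healthz endpoint that expects the literal body "ok". A stripped-down sketch of the same probe, skipping client certificates and TLS verification purely for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // NOTE: InsecureSkipVerify is only for illustration; the real check
        // authenticates with the cluster's client certificates instead.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.39.48:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
    }
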
	I1105 18:05:20.756978   27131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:05:20.930580   27131 request.go:632] Waited for 173.520285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:20.930658   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:20.930664   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.930672   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.930676   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.935815   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:05:20.939904   27131 system_pods.go:59] 17 kube-system pods found
	I1105 18:05:20.939939   27131 system_pods.go:61] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:05:20.939945   27131 system_pods.go:61] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:05:20.939949   27131 system_pods.go:61] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:05:20.939952   27131 system_pods.go:61] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:05:20.939955   27131 system_pods.go:61] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:05:20.939959   27131 system_pods.go:61] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:05:20.939962   27131 system_pods.go:61] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:05:20.939965   27131 system_pods.go:61] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:05:20.939968   27131 system_pods.go:61] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:05:20.939977   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:05:20.939981   27131 system_pods.go:61] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:05:20.939984   27131 system_pods.go:61] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:05:20.939989   27131 system_pods.go:61] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:05:20.939992   27131 system_pods.go:61] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:05:20.939997   27131 system_pods.go:61] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:05:20.940003   27131 system_pods.go:61] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:05:20.940006   27131 system_pods.go:61] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:05:20.940012   27131 system_pods.go:74] duration metric: took 183.024873ms to wait for pod list to return data ...
	I1105 18:05:20.940022   27131 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:05:21.131476   27131 request.go:632] Waited for 191.3776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:05:21.131535   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:05:21.131540   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.131548   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.131552   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.135052   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:21.135309   27131 default_sa.go:45] found service account: "default"
	I1105 18:05:21.135328   27131 default_sa.go:55] duration metric: took 195.299598ms for default service account to be created ...
	I1105 18:05:21.135339   27131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:05:21.330735   27131 request.go:632] Waited for 195.314096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:21.330794   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:21.330799   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.330807   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.330810   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.335501   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:21.339693   27131 system_pods.go:86] 17 kube-system pods found
	I1105 18:05:21.339720   27131 system_pods.go:89] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:05:21.339726   27131 system_pods.go:89] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:05:21.339731   27131 system_pods.go:89] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:05:21.339734   27131 system_pods.go:89] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:05:21.339738   27131 system_pods.go:89] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:05:21.339741   27131 system_pods.go:89] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:05:21.339745   27131 system_pods.go:89] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:05:21.339748   27131 system_pods.go:89] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:05:21.339751   27131 system_pods.go:89] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:05:21.339755   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:05:21.339759   27131 system_pods.go:89] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:05:21.339762   27131 system_pods.go:89] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:05:21.339765   27131 system_pods.go:89] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:05:21.339769   27131 system_pods.go:89] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:05:21.339774   27131 system_pods.go:89] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:05:21.339779   27131 system_pods.go:89] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:05:21.339782   27131 system_pods.go:89] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:05:21.339788   27131 system_pods.go:126] duration metric: took 204.442408ms to wait for k8s-apps to be running ...
	I1105 18:05:21.339802   27131 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:05:21.339842   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:05:21.354615   27131 system_svc.go:56] duration metric: took 14.795984ms WaitForService to wait for kubelet
	I1105 18:05:21.354651   27131 kubeadm.go:582] duration metric: took 23.660015871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:05:21.354696   27131 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:05:21.531068   27131 request.go:632] Waited for 176.291328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I1105 18:05:21.531146   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes
	I1105 18:05:21.531151   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.531159   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.531164   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.534798   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:21.535495   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:05:21.535541   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:05:21.535562   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:05:21.535565   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:05:21.535570   27131 node_conditions.go:105] duration metric: took 180.868401ms to run NodePressure ...
	I1105 18:05:21.535585   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:05:21.535607   27131 start.go:255] writing updated cluster config ...
	I1105 18:05:21.537763   27131 out.go:201] 
	I1105 18:05:21.539166   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:21.539250   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:21.540660   27131 out.go:177] * Starting "ha-844661-m03" control-plane node in "ha-844661" cluster
	I1105 18:05:21.541637   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:05:21.541660   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:05:21.541776   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:05:21.541788   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:05:21.541886   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:21.542068   27131 start.go:360] acquireMachinesLock for ha-844661-m03: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:05:21.542109   27131 start.go:364] duration metric: took 21.826µs to acquireMachinesLock for "ha-844661-m03"
	I1105 18:05:21.542124   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:05:21.542209   27131 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1105 18:05:21.543860   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:05:21.543943   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:21.543975   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:21.559283   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1105 18:05:21.559671   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:21.560085   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:21.560107   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:21.560440   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:21.560618   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:21.560762   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:21.560967   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:05:21.560994   27131 client.go:168] LocalClient.Create starting
	I1105 18:05:21.561031   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:05:21.561079   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:05:21.561096   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:05:21.561164   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:05:21.561192   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:05:21.561207   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:05:21.561232   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:05:21.561244   27131 main.go:141] libmachine: (ha-844661-m03) Calling .PreCreateCheck
	I1105 18:05:21.561482   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:21.561876   27131 main.go:141] libmachine: Creating machine...
	I1105 18:05:21.561887   27131 main.go:141] libmachine: (ha-844661-m03) Calling .Create
	I1105 18:05:21.562039   27131 main.go:141] libmachine: (ha-844661-m03) Creating KVM machine...
	I1105 18:05:21.563199   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found existing default KVM network
	I1105 18:05:21.563316   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found existing private KVM network mk-ha-844661
	I1105 18:05:21.563415   27131 main.go:141] libmachine: (ha-844661-m03) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 ...
	I1105 18:05:21.563439   27131 main.go:141] libmachine: (ha-844661-m03) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:05:21.563512   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.563393   27902 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:05:21.563587   27131 main.go:141] libmachine: (ha-844661-m03) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:05:21.796365   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.796229   27902 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa...
	I1105 18:05:21.882674   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.882568   27902 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/ha-844661-m03.rawdisk...
	I1105 18:05:21.882702   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Writing magic tar header
	I1105 18:05:21.882713   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Writing SSH key tar header
	I1105 18:05:21.882768   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.882708   27902 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 ...
	I1105 18:05:21.882834   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03
	I1105 18:05:21.882863   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 (perms=drwx------)
	I1105 18:05:21.882876   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:05:21.882896   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:05:21.882908   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:05:21.882922   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:05:21.882944   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:05:21.882956   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:05:21.883017   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home
	I1105 18:05:21.883034   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Skipping /home - not owner
	I1105 18:05:21.883044   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:05:21.883057   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:05:21.883070   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:05:21.883081   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:05:21.883089   27131 main.go:141] libmachine: (ha-844661-m03) Creating domain...
	I1105 18:05:21.883931   27131 main.go:141] libmachine: (ha-844661-m03) define libvirt domain using xml: 
	I1105 18:05:21.883952   27131 main.go:141] libmachine: (ha-844661-m03) <domain type='kvm'>
	I1105 18:05:21.883976   27131 main.go:141] libmachine: (ha-844661-m03)   <name>ha-844661-m03</name>
	I1105 18:05:21.883997   27131 main.go:141] libmachine: (ha-844661-m03)   <memory unit='MiB'>2200</memory>
	I1105 18:05:21.884009   27131 main.go:141] libmachine: (ha-844661-m03)   <vcpu>2</vcpu>
	I1105 18:05:21.884020   27131 main.go:141] libmachine: (ha-844661-m03)   <features>
	I1105 18:05:21.884028   27131 main.go:141] libmachine: (ha-844661-m03)     <acpi/>
	I1105 18:05:21.884038   27131 main.go:141] libmachine: (ha-844661-m03)     <apic/>
	I1105 18:05:21.884046   27131 main.go:141] libmachine: (ha-844661-m03)     <pae/>
	I1105 18:05:21.884056   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884078   27131 main.go:141] libmachine: (ha-844661-m03)   </features>
	I1105 18:05:21.884099   27131 main.go:141] libmachine: (ha-844661-m03)   <cpu mode='host-passthrough'>
	I1105 18:05:21.884109   27131 main.go:141] libmachine: (ha-844661-m03)   
	I1105 18:05:21.884119   27131 main.go:141] libmachine: (ha-844661-m03)   </cpu>
	I1105 18:05:21.884129   27131 main.go:141] libmachine: (ha-844661-m03)   <os>
	I1105 18:05:21.884134   27131 main.go:141] libmachine: (ha-844661-m03)     <type>hvm</type>
	I1105 18:05:21.884144   27131 main.go:141] libmachine: (ha-844661-m03)     <boot dev='cdrom'/>
	I1105 18:05:21.884151   27131 main.go:141] libmachine: (ha-844661-m03)     <boot dev='hd'/>
	I1105 18:05:21.884159   27131 main.go:141] libmachine: (ha-844661-m03)     <bootmenu enable='no'/>
	I1105 18:05:21.884169   27131 main.go:141] libmachine: (ha-844661-m03)   </os>
	I1105 18:05:21.884183   27131 main.go:141] libmachine: (ha-844661-m03)   <devices>
	I1105 18:05:21.884200   27131 main.go:141] libmachine: (ha-844661-m03)     <disk type='file' device='cdrom'>
	I1105 18:05:21.884216   27131 main.go:141] libmachine: (ha-844661-m03)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/boot2docker.iso'/>
	I1105 18:05:21.884227   27131 main.go:141] libmachine: (ha-844661-m03)       <target dev='hdc' bus='scsi'/>
	I1105 18:05:21.884237   27131 main.go:141] libmachine: (ha-844661-m03)       <readonly/>
	I1105 18:05:21.884245   27131 main.go:141] libmachine: (ha-844661-m03)     </disk>
	I1105 18:05:21.884252   27131 main.go:141] libmachine: (ha-844661-m03)     <disk type='file' device='disk'>
	I1105 18:05:21.884260   27131 main.go:141] libmachine: (ha-844661-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:05:21.884267   27131 main.go:141] libmachine: (ha-844661-m03)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/ha-844661-m03.rawdisk'/>
	I1105 18:05:21.884274   27131 main.go:141] libmachine: (ha-844661-m03)       <target dev='hda' bus='virtio'/>
	I1105 18:05:21.884279   27131 main.go:141] libmachine: (ha-844661-m03)     </disk>
	I1105 18:05:21.884289   27131 main.go:141] libmachine: (ha-844661-m03)     <interface type='network'>
	I1105 18:05:21.884295   27131 main.go:141] libmachine: (ha-844661-m03)       <source network='mk-ha-844661'/>
	I1105 18:05:21.884305   27131 main.go:141] libmachine: (ha-844661-m03)       <model type='virtio'/>
	I1105 18:05:21.884313   27131 main.go:141] libmachine: (ha-844661-m03)     </interface>
	I1105 18:05:21.884318   27131 main.go:141] libmachine: (ha-844661-m03)     <interface type='network'>
	I1105 18:05:21.884326   27131 main.go:141] libmachine: (ha-844661-m03)       <source network='default'/>
	I1105 18:05:21.884330   27131 main.go:141] libmachine: (ha-844661-m03)       <model type='virtio'/>
	I1105 18:05:21.884337   27131 main.go:141] libmachine: (ha-844661-m03)     </interface>
	I1105 18:05:21.884341   27131 main.go:141] libmachine: (ha-844661-m03)     <serial type='pty'>
	I1105 18:05:21.884347   27131 main.go:141] libmachine: (ha-844661-m03)       <target port='0'/>
	I1105 18:05:21.884351   27131 main.go:141] libmachine: (ha-844661-m03)     </serial>
	I1105 18:05:21.884358   27131 main.go:141] libmachine: (ha-844661-m03)     <console type='pty'>
	I1105 18:05:21.884363   27131 main.go:141] libmachine: (ha-844661-m03)       <target type='serial' port='0'/>
	I1105 18:05:21.884377   27131 main.go:141] libmachine: (ha-844661-m03)     </console>
	I1105 18:05:21.884395   27131 main.go:141] libmachine: (ha-844661-m03)     <rng model='virtio'>
	I1105 18:05:21.884408   27131 main.go:141] libmachine: (ha-844661-m03)       <backend model='random'>/dev/random</backend>
	I1105 18:05:21.884417   27131 main.go:141] libmachine: (ha-844661-m03)     </rng>
	I1105 18:05:21.884432   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884441   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884448   27131 main.go:141] libmachine: (ha-844661-m03)   </devices>
	I1105 18:05:21.884457   27131 main.go:141] libmachine: (ha-844661-m03) </domain>
	I1105 18:05:21.884464   27131 main.go:141] libmachine: (ha-844661-m03) 
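
Defining and booting a domain from XML like the one logged above goes through the libvirt API. A minimal sketch using the Go libvirt bindings; the libvirt.org/go/libvirt import path is an assumption, and domainXML stands in for the XML printed above rather than being the real definition:

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Same URI as KVMQemuURI in the machine config above.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Placeholder for the full <domain> XML logged above.
        domainXML := "<domain type='kvm'>...</domain>"

        // Define the persistent domain, then start it.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain defined and started")
    }
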
	I1105 18:05:21.890775   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:13:05:59 in network default
	I1105 18:05:21.891360   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring networks are active...
	I1105 18:05:21.891380   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:21.892107   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring network default is active
	I1105 18:05:21.892388   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring network mk-ha-844661 is active
	I1105 18:05:21.892764   27131 main.go:141] libmachine: (ha-844661-m03) Getting domain xml...
	I1105 18:05:21.893494   27131 main.go:141] libmachine: (ha-844661-m03) Creating domain...
	I1105 18:05:23.118308   27131 main.go:141] libmachine: (ha-844661-m03) Waiting to get IP...
	I1105 18:05:23.119070   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.119438   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.119465   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.119424   27902 retry.go:31] will retry after 298.334175ms: waiting for machine to come up
	I1105 18:05:23.419032   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.419605   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.419622   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.419554   27902 retry.go:31] will retry after 273.113851ms: waiting for machine to come up
	I1105 18:05:23.693944   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.694349   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.694376   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.694317   27902 retry.go:31] will retry after 416.726009ms: waiting for machine to come up
	I1105 18:05:24.112851   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:24.113218   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:24.113249   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:24.113181   27902 retry.go:31] will retry after 551.953216ms: waiting for machine to come up
	I1105 18:05:24.666824   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:24.667304   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:24.667333   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:24.667249   27902 retry.go:31] will retry after 466.975145ms: waiting for machine to come up
	I1105 18:05:25.135836   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:25.136271   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:25.136292   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:25.136228   27902 retry.go:31] will retry after 589.586585ms: waiting for machine to come up
	I1105 18:05:25.726897   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:25.727480   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:25.727508   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:25.727434   27902 retry.go:31] will retry after 968.18251ms: waiting for machine to come up
	I1105 18:05:26.697257   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:26.697626   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:26.697652   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:26.697586   27902 retry.go:31] will retry after 1.127611463s: waiting for machine to come up
	I1105 18:05:27.826904   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:27.827312   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:27.827340   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:27.827258   27902 retry.go:31] will retry after 1.342205842s: waiting for machine to come up
	I1105 18:05:29.171618   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:29.172079   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:29.172146   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:29.172073   27902 retry.go:31] will retry after 1.974625708s: waiting for machine to come up
	I1105 18:05:31.148071   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:31.148482   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:31.148499   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:31.148434   27902 retry.go:31] will retry after 2.71055754s: waiting for machine to come up
	I1105 18:05:33.861975   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:33.862458   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:33.862483   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:33.862417   27902 retry.go:31] will retry after 3.509037885s: waiting for machine to come up
	I1105 18:05:37.373198   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:37.373748   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:37.373778   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:37.373690   27902 retry.go:31] will retry after 4.502442692s: waiting for machine to come up
	I1105 18:05:41.878135   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.878636   27131 main.go:141] libmachine: (ha-844661-m03) Found IP for machine: 192.168.39.52
	I1105 18:05:41.878665   27131 main.go:141] libmachine: (ha-844661-m03) Reserving static IP address...
	I1105 18:05:41.878678   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has current primary IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.879129   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find host DHCP lease matching {name: "ha-844661-m03", mac: "52:54:00:62:70:0e", ip: "192.168.39.52"} in network mk-ha-844661
	I1105 18:05:41.955281   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Getting to WaitForSSH function...
	I1105 18:05:41.955317   27131 main.go:141] libmachine: (ha-844661-m03) Reserved static IP address: 192.168.39.52
	I1105 18:05:41.955331   27131 main.go:141] libmachine: (ha-844661-m03) Waiting for SSH to be available...
	I1105 18:05:41.957358   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.957752   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:41.957781   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.957992   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using SSH client type: external
	I1105 18:05:41.958020   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa (-rw-------)
	I1105 18:05:41.958098   27131 main.go:141] libmachine: (ha-844661-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:05:41.958121   27131 main.go:141] libmachine: (ha-844661-m03) DBG | About to run SSH command:
	I1105 18:05:41.958159   27131 main.go:141] libmachine: (ha-844661-m03) DBG | exit 0
	I1105 18:05:42.086743   27131 main.go:141] libmachine: (ha-844661-m03) DBG | SSH cmd err, output: <nil>: 
	I1105 18:05:42.087041   27131 main.go:141] libmachine: (ha-844661-m03) KVM machine creation complete!
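
The WaitForSSH step above repeatedly runs `exit 0` over SSH until the command succeeds. A reduced sketch of the same idea, probing only that TCP port 22 accepts connections; the address and timeouts are illustrative values:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls addr until a TCP connection succeeds or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForSSH("192.168.39.52:22", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ssh port is reachable")
    }
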
	I1105 18:05:42.087332   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:42.087854   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:42.088045   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:42.088232   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:05:42.088247   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetState
	I1105 18:05:42.089254   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:05:42.089266   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:05:42.089278   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:05:42.089283   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.091449   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.091761   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.091789   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.091901   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.092048   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.092179   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.092313   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.092495   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.092748   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.092763   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:05:42.206064   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:05:42.206086   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:05:42.206094   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.208351   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.208732   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.208750   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.208928   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.209072   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.209271   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.209444   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.209598   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.209769   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.209780   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:05:42.323709   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:05:42.323865   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:05:42.323878   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:05:42.323888   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.324267   27131 buildroot.go:166] provisioning hostname "ha-844661-m03"
	I1105 18:05:42.324297   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.324481   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.327505   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.327833   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.327862   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.328041   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.328248   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.328422   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.328544   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.328776   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.329027   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.329041   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661-m03 && echo "ha-844661-m03" | sudo tee /etc/hostname
	I1105 18:05:42.457338   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661-m03
	
	I1105 18:05:42.457368   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.460928   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.461292   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.461321   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.461510   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.461681   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.461835   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.461969   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.462135   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.462324   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.462348   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:05:42.583532   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
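(For reference: the /etc/hosts script above is idempotent — it only rewrites the 127.0.1.1 entry when the hostname is not already present, so re-running provisioning on ha-844661-m03 leaves the file unchanged.)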
	I1105 18:05:42.583564   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:05:42.583578   27131 buildroot.go:174] setting up certificates
	I1105 18:05:42.583593   27131 provision.go:84] configureAuth start
	I1105 18:05:42.583602   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.583890   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:42.586719   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.587067   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.587099   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.587290   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.589736   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.590192   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.590227   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.590360   27131 provision.go:143] copyHostCerts
	I1105 18:05:42.590408   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:05:42.590449   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:05:42.590459   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:05:42.590538   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:05:42.590622   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:05:42.590645   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:05:42.590652   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:05:42.590675   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:05:42.590726   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:05:42.590742   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:05:42.590748   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:05:42.590768   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:05:42.590820   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661-m03 san=[127.0.0.1 192.168.39.52 ha-844661-m03 localhost minikube]
	I1105 18:05:42.925752   27131 provision.go:177] copyRemoteCerts
	I1105 18:05:42.925808   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:05:42.925833   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.928689   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.929066   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.929101   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.929303   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.929489   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.929666   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.929803   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.020278   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:05:43.020356   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:05:43.044012   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:05:43.044085   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:05:43.067535   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:05:43.067615   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:05:43.091055   27131 provision.go:87] duration metric: took 507.451446ms to configureAuth
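(For reference, the configureAuth step timed above generates a docker-machine style server certificate signed by the local CA, using the SAN list printed at 18:05:42.590820. A rough bash/openssl equivalent is sketched below; minikube actually does this with its own Go cert helpers, so the key size, validity and extension handling here are assumptions, only the subject org and SANs come from the log:)

    # sketch only: reproduce the server cert from the log's SAN list (requires bash for <(...))
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-844661-m03"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.52,DNS:ha-844661-m03,DNS:localhost,DNS:minikube")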
	I1105 18:05:43.091084   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:05:43.091353   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:43.091482   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.094765   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.095169   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.095193   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.095384   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.095574   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.095740   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.095896   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.096067   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:43.096263   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:43.096284   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:05:43.325666   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:05:43.325693   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:05:43.325711   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetURL
	I1105 18:05:43.326946   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using libvirt version 6000000
	I1105 18:05:43.329691   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.330121   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.330146   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.330327   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:05:43.330347   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:05:43.330356   27131 client.go:171] duration metric: took 21.769352405s to LocalClient.Create
	I1105 18:05:43.330393   27131 start.go:167] duration metric: took 21.769425686s to libmachine.API.Create "ha-844661"
	I1105 18:05:43.330407   27131 start.go:293] postStartSetup for "ha-844661-m03" (driver="kvm2")
	I1105 18:05:43.330422   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:05:43.330439   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.330671   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:05:43.330693   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.332887   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.333189   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.333218   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.333427   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.333597   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.333764   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.333891   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.421747   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:05:43.425946   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:05:43.425980   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:05:43.426048   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:05:43.426118   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:05:43.426127   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:05:43.426241   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:05:43.436295   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:05:43.461822   27131 start.go:296] duration metric: took 131.400624ms for postStartSetup
	I1105 18:05:43.461911   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:43.462559   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:43.465039   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.465395   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.465419   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.465660   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:43.465861   27131 start.go:128] duration metric: took 21.923641121s to createHost
	I1105 18:05:43.465891   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.468236   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.468751   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.468776   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.468993   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.469148   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.469288   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.469410   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.469542   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:43.469719   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:43.469729   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:05:43.583301   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829943.559053309
	
	I1105 18:05:43.583330   27131 fix.go:216] guest clock: 1730829943.559053309
	I1105 18:05:43.583338   27131 fix.go:229] Guest: 2024-11-05 18:05:43.559053309 +0000 UTC Remote: 2024-11-05 18:05:43.465876826 +0000 UTC m=+142.850569806 (delta=93.176483ms)
	I1105 18:05:43.583357   27131 fix.go:200] guest clock delta is within tolerance: 93.176483ms
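(The delta above is simply guest clock minus the host-side timestamp recorded just before the SSH call: 1730829943.559053309 − 1730829943.465876826 ≈ 0.093176 s, i.e. the 93.176483ms in the log, which fix.go then compares against its drift tolerance.)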
	I1105 18:05:43.583365   27131 start.go:83] releasing machines lock for "ha-844661-m03", held for 22.041249603s
	I1105 18:05:43.583392   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.583670   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:43.586387   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.586835   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.586865   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.589174   27131 out.go:177] * Found network options:
	I1105 18:05:43.590513   27131 out.go:177]   - NO_PROXY=192.168.39.48,192.168.39.38
	W1105 18:05:43.591696   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:05:43.591728   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:05:43.591742   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592264   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592439   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592540   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:05:43.592583   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	W1105 18:05:43.592659   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:05:43.592686   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:05:43.592773   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:05:43.592798   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.595358   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595711   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.595743   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595763   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595936   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.596109   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.596235   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.596238   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.596260   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.596402   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.596401   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.596521   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.596667   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.596795   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.836071   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:05:43.841664   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:05:43.841742   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:05:43.858022   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:05:43.858050   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:05:43.858129   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:05:43.874613   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:05:43.888461   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:05:43.888526   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:05:43.901586   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:05:43.914516   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:05:44.022716   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:05:44.162802   27131 docker.go:233] disabling docker service ...
	I1105 18:05:44.162867   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:05:44.178520   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:05:44.190518   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:05:44.307326   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:05:44.440411   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:05:44.453238   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:05:44.471519   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:05:44.471573   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.481424   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:05:44.481492   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.491154   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.500794   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.511947   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:05:44.521660   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.531075   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.547126   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.557037   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:05:44.565707   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:05:44.565772   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:05:44.580225   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:05:44.590720   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:05:44.720733   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
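(Taken together, the sed edits above leave the 02-crio.conf drop-in with roughly the keys below, which the crio restart then picks up. This is a sketch of the expected result derived from the commands in the log, not a dump from the node; other keys in the file are untouched:)

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",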
	I1105 18:05:44.813635   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:05:44.813712   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:05:44.818398   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:05:44.818453   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:05:44.821924   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:05:44.862340   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:05:44.862414   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:05:44.888088   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:05:44.915450   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:05:44.916959   27131 out.go:177]   - env NO_PROXY=192.168.39.48
	I1105 18:05:44.918290   27131 out.go:177]   - env NO_PROXY=192.168.39.48,192.168.39.38
	I1105 18:05:44.919504   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:44.921870   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:44.922342   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:44.922369   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:44.922579   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:05:44.926550   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:05:44.938321   27131 mustload.go:65] Loading cluster: ha-844661
	I1105 18:05:44.938602   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:44.939019   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:44.939070   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:44.954536   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
	I1105 18:05:44.955060   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:44.955556   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:44.955581   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:44.955872   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:44.956050   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:05:44.957611   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:05:44.957920   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:44.957971   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:44.973679   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33387
	I1105 18:05:44.974166   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:44.974646   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:44.974660   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:44.974951   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:44.975198   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:05:44.975390   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.52
	I1105 18:05:44.975402   27131 certs.go:194] generating shared ca certs ...
	I1105 18:05:44.975424   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:44.975543   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:05:44.975579   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:05:44.975587   27131 certs.go:256] generating profile certs ...
	I1105 18:05:44.975659   27131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:05:44.975685   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b
	I1105 18:05:44.975700   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.52 192.168.39.254]
	I1105 18:05:45.201266   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b ...
	I1105 18:05:45.201297   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b: {Name:mk528e0260fc30831e80a622836a2ff38ea38838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:45.201463   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b ...
	I1105 18:05:45.201476   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b: {Name:mkf6f5a9f3c5c5cd5e56be42a7f99d1f16c92ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:45.201544   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:05:45.201685   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:05:45.201845   27131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:05:45.201861   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:05:45.201877   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:05:45.201896   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:05:45.201914   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:05:45.201928   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:05:45.201942   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:05:45.201954   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:05:45.215059   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:05:45.215144   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:05:45.215186   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:05:45.215194   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:05:45.215215   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:05:45.215240   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:05:45.215272   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:05:45.215314   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:05:45.215350   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.215374   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.215398   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.215435   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:05:45.218425   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:45.218874   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:05:45.218901   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:45.219093   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:05:45.219284   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:05:45.219433   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:05:45.219544   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:05:45.291312   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:05:45.296113   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:05:45.309256   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:05:45.313268   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1105 18:05:45.324891   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:05:45.328601   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:05:45.339115   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:05:45.343326   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:05:45.353973   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:05:45.357652   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:05:45.367881   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:05:45.371920   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 18:05:45.381431   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:05:45.405521   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:05:45.428099   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:05:45.450896   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:05:45.472444   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1105 18:05:45.494567   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:05:45.518941   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:05:45.542679   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:05:45.565272   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:05:45.586847   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:05:45.609171   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:05:45.631071   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:05:45.647046   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1105 18:05:45.662643   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:05:45.677589   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:05:45.693263   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:05:45.708513   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 18:05:45.723904   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:05:45.739595   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:05:45.744988   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:05:45.754754   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.759038   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.759097   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.764843   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:05:45.774526   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:05:45.784026   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.788019   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.788066   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.793328   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:05:45.803282   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:05:45.813203   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.817364   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.817407   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.822692   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:05:45.832731   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:05:45.836652   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:05:45.836705   27131 kubeadm.go:934] updating node {m03 192.168.39.52 8443 v1.31.2 crio true true} ...
	I1105 18:05:45.836816   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:05:45.836851   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:05:45.836896   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:05:45.851973   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:05:45.852033   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
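(The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a moment later — the 1441-byte scp at 18:05:46.823256 — so each control-plane kubelet runs kube-vip as a static pod. Once it is up, the VIP and the leader lease can be checked as below; this is a sketch, with the interface, address and lease name taken from the env vars in the manifest:)

    ip addr show eth0 | grep 192.168.39.254           # VIP is bound on the current leader only
    kubectl -n kube-system get lease plndr-cp-lock    # vip_leasename; holder identity is the leading node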
	I1105 18:05:45.852095   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:05:45.861565   27131 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 18:05:45.861624   27131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 18:05:45.871179   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1105 18:05:45.871192   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 18:05:45.871218   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:05:45.871230   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:05:45.871246   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1105 18:05:45.871262   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:05:45.871285   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:05:45.871314   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:05:45.885118   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:05:45.885168   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 18:05:45.885198   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 18:05:45.885198   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 18:05:45.885201   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:05:45.885224   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 18:05:45.895722   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 18:05:45.895762   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
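(The three existence checks plus scp transfers above amount to a cache-or-copy loop along these lines. This is a condensed sketch: the real code stats over SSH via ssh_runner and copies with root privileges on the guest, which plain scp to /var/lib/minikube would not have. Paths and the SSH identity are the ones printed earlier in this log:)

    KEY=/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa
    CACHE=/home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2
    for b in kubeadm kubectl kubelet; do
      ssh -i "$KEY" docker@192.168.39.52 stat "/var/lib/minikube/binaries/v1.31.2/$b" >/dev/null 2>&1 \
        || scp -i "$KEY" "$CACHE/$b" docker@192.168.39.52:/var/lib/minikube/binaries/v1.31.2/
    done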
	I1105 18:05:46.776289   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:05:46.785676   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1105 18:05:46.804664   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:05:46.823256   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:05:46.839659   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:05:46.843739   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:05:46.855127   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:05:46.984151   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:05:47.002930   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:05:47.003372   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:47.003427   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:47.019365   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I1105 18:05:47.020121   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:47.020574   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:47.020595   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:47.020908   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:47.021095   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:05:47.021355   27131 start.go:317] joinCluster: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:05:47.021508   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 18:05:47.021529   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:05:47.024802   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:47.025266   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:05:47.025301   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:47.025485   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:05:47.025649   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:05:47.025818   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:05:47.025989   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:05:47.187808   27131 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:05:47.187862   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ywlsrk.n1qe1uf11bwul667 --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03 --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443"
	I1105 18:06:08.756523   27131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ywlsrk.n1qe1uf11bwul667 --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03 --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443": (21.568638959s)
	I1105 18:06:08.756554   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 18:06:09.321152   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661-m03 minikube.k8s.io/updated_at=2024_11_05T18_06_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=false
	I1105 18:06:09.429932   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844661-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 18:06:09.553648   27131 start.go:319] duration metric: took 22.532294884s to joinCluster
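The join step logged above reduces to two kubeadm invocations plus a label/taint fixup. A condensed sketch of that flow, with the token and CA hash left as placeholders (the real values appear in the kubeadm join line above; the kubeconfig context name is assumed to match the profile):

	# on an existing control-plane node: mint a non-expiring token and print the join command
	kubeadm token create --print-join-command --ttl=0
	# on the new machine: join as an additional control-plane member behind the VIP
	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443 \
	  --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03
	# then label the node and drop the control-plane NoSchedule taint so it can also run workloads
	kubectl --context ha-844661 label --overwrite nodes ha-844661-m03 minikube.k8s.io/primary=false
	kubectl --context ha-844661 taint nodes ha-844661-m03 node-role.kubernetes.io/control-plane:NoSchedule-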
	I1105 18:06:09.553756   27131 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:06:09.554141   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:09.555396   27131 out.go:177] * Verifying Kubernetes components...
	I1105 18:06:09.556678   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:09.771512   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:06:09.788145   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:06:09.788384   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:06:09.788445   27131 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.48:8443
	I1105 18:06:09.788700   27131 node_ready.go:35] waiting up to 6m0s for node "ha-844661-m03" to be "Ready" ...
	I1105 18:06:09.788799   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:09.788806   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:09.788814   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:09.788817   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:09.792219   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:10.289451   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:10.289477   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:10.289489   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:10.289494   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:10.292860   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:10.789577   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:10.789602   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:10.789615   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:10.789623   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:10.793572   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.289465   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:11.289484   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:11.289492   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:11.289498   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:11.292734   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.789023   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:11.789052   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:11.789064   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:11.789070   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:11.792248   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.792884   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:12.289577   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:12.289596   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:12.289604   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:12.289609   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:12.292931   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:12.789594   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:12.789615   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:12.789623   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:12.789628   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:12.793282   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.288880   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:13.288900   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:13.288909   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:13.288912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:13.292354   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.789203   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:13.789228   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:13.789240   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:13.789244   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:13.792591   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.793128   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:14.289574   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:14.289596   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:14.289605   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:14.289610   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:14.292856   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:14.789821   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:14.789847   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:14.789858   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:14.789863   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:14.793134   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.289398   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:15.289420   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:15.289428   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:15.289433   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:15.292967   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.789567   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:15.789591   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:15.789602   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:15.789607   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:15.793208   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.793657   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:16.289022   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:16.289046   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:16.289056   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.289062   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:16.309335   27131 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1105 18:06:16.789461   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:16.789479   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:16.789488   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.789492   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:16.793000   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:17.289308   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:17.289333   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:17.289345   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:17.289354   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:17.292729   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:17.789752   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:17.789779   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:17.789791   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:17.789798   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:17.794196   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:17.794657   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:18.288931   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:18.288964   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:18.288972   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:18.288976   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:18.292090   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:18.789058   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:18.789080   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:18.789086   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:18.789090   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:18.792559   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:19.289923   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:19.289950   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:19.289961   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:19.289966   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:19.293279   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:19.789125   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:19.789153   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:19.789164   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:19.789170   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:19.792732   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:20.289126   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:20.289149   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:20.289157   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:20.289162   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:20.292641   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:20.293309   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:20.789527   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:20.789549   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:20.789557   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:20.789561   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:20.792849   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:21.289833   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:21.289856   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:21.289863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:21.289867   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:21.293665   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:21.789877   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:21.789900   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:21.789908   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:21.789912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:21.793341   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:22.289645   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:22.289664   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:22.289672   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:22.289676   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:22.292986   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:22.293503   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:22.789122   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:22.789148   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:22.789160   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:22.789164   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:22.792397   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:23.289550   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:23.289574   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:23.289584   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:23.289591   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:23.293009   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:23.789081   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:23.789104   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:23.789112   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:23.789116   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:23.792559   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:24.289408   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:24.289432   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:24.289444   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:24.289448   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:24.293655   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:24.294170   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:24.789552   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:24.789579   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:24.789592   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:24.789598   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:24.792779   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:25.289364   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:25.289386   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:25.289393   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:25.289398   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:25.293189   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:25.789622   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:25.789644   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:25.789652   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:25.789655   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:25.792920   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.288919   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:26.288944   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:26.288954   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:26.288961   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:26.292248   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.789720   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:26.789741   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:26.789749   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:26.789753   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:26.793339   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.793840   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:27.289627   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:27.289653   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:27.289664   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:27.289671   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:27.292896   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:27.789396   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:27.789418   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:27.789426   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:27.789430   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:27.793104   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.288926   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.288950   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.288958   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.288962   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.292349   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.292934   27131 node_ready.go:49] node "ha-844661-m03" has status "Ready":"True"
	I1105 18:06:28.292959   27131 node_ready.go:38] duration metric: took 18.504244816s for node "ha-844661-m03" to be "Ready" ...
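The readiness loop above polls GET /api/v1/nodes/ha-844661-m03 roughly every 500ms until the node's Ready condition reports True. Roughly the same check as a one-liner, assuming a kubeconfig context named after the profile:

	kubectl --context ha-844661 wait --for=condition=Ready node/ha-844661-m03 --timeout=6m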
	I1105 18:06:28.292967   27131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:06:28.293052   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:28.293062   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.293069   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.293073   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.298865   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:06:28.305101   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.305172   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4bdfz
	I1105 18:06:28.305180   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.305187   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.305191   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.308014   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.308823   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.308838   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.308845   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.308848   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.311202   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.311752   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.311769   27131 pod_ready.go:82] duration metric: took 6.646273ms for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.311778   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.311825   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5g97
	I1105 18:06:28.311833   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.311839   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.311842   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.314162   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.315006   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.315022   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.315032   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.315037   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.317112   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.317771   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.317790   27131 pod_ready.go:82] duration metric: took 6.006174ms for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.317799   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.317847   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661
	I1105 18:06:28.317855   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.317861   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.317869   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.320184   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.320779   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.320794   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.320801   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.320804   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.323022   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.323542   27131 pod_ready.go:93] pod "etcd-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.323560   27131 pod_ready.go:82] duration metric: took 5.754386ms for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.323568   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.323613   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m02
	I1105 18:06:28.323621   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.323627   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.323631   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.325924   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.326482   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:28.326496   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.326503   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.326510   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.328928   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.329392   27131 pod_ready.go:93] pod "etcd-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.329412   27131 pod_ready.go:82] duration metric: took 5.837481ms for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.329426   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.489824   27131 request.go:632] Waited for 160.309715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m03
	I1105 18:06:28.489893   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m03
	I1105 18:06:28.489899   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.489906   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.489914   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.493239   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.689345   27131 request.go:632] Waited for 195.357359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.689416   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.689422   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.689430   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.689436   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.692948   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.693449   27131 pod_ready.go:93] pod "etcd-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.693468   27131 pod_ready.go:82] duration metric: took 364.031884ms for pod "etcd-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.693488   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.889759   27131 request.go:632] Waited for 196.181442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:06:28.889818   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:06:28.889823   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.889830   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.889836   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.893294   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.089232   27131 request.go:632] Waited for 195.272157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:29.089332   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:29.089345   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.089355   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.089363   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.092371   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:29.093062   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.093081   27131 pod_ready.go:82] duration metric: took 399.581249ms for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.093095   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.289039   27131 request.go:632] Waited for 195.870378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:06:29.289108   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:06:29.289114   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.289121   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.289127   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.292782   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.489337   27131 request.go:632] Waited for 195.348089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:29.489423   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:29.489428   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.489439   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.489446   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.492721   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.493290   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.493309   27131 pod_ready.go:82] duration metric: took 400.203815ms for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.493320   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.689371   27131 request.go:632] Waited for 195.98498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m03
	I1105 18:06:29.689467   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m03
	I1105 18:06:29.689479   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.689489   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.689497   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.692955   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.888986   27131 request.go:632] Waited for 195.295088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:29.889053   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:29.889060   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.889071   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.889080   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.892048   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:29.892533   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.892549   27131 pod_ready.go:82] duration metric: took 399.221552ms for pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.892559   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.089669   27131 request.go:632] Waited for 197.039051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:06:30.089731   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:06:30.089736   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.089745   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.089749   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.093164   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.289306   27131 request.go:632] Waited for 195.324188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:30.289372   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:30.289384   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.289397   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.289407   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.292636   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.293206   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:30.293227   27131 pod_ready.go:82] duration metric: took 400.66121ms for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.293238   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.489536   27131 request.go:632] Waited for 196.217205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:06:30.489646   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:06:30.489658   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.489668   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.489675   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.493045   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.688919   27131 request.go:632] Waited for 195.135908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:30.688971   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:30.688976   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.688984   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.688988   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.692203   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.692968   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:30.692987   27131 pod_ready.go:82] duration metric: took 399.741193ms for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.693001   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.889370   27131 request.go:632] Waited for 196.304824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m03
	I1105 18:06:30.889450   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m03
	I1105 18:06:30.889457   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.889465   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.889472   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.892647   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.089803   27131 request.go:632] Waited for 196.376037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.089851   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.089855   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.089863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.089869   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.093035   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.093548   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.093568   27131 pod_ready.go:82] duration metric: took 400.558908ms for pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.093580   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mk9m" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.289696   27131 request.go:632] Waited for 196.055175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mk9m
	I1105 18:06:31.289756   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mk9m
	I1105 18:06:31.289761   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.289768   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.289772   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.293304   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.489478   27131 request.go:632] Waited for 195.351968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.489541   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.489549   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.489556   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.489562   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.492991   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.493563   27131 pod_ready.go:93] pod "kube-proxy-2mk9m" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.493582   27131 pod_ready.go:82] duration metric: took 399.995731ms for pod "kube-proxy-2mk9m" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.493592   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.689978   27131 request.go:632] Waited for 196.300604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:06:31.690038   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:06:31.690043   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.690050   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.690053   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.693380   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.889851   27131 request.go:632] Waited for 195.375559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:31.889905   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:31.889910   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.889917   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.889922   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.893474   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.894113   27131 pod_ready.go:93] pod "kube-proxy-pjpkh" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.894132   27131 pod_ready.go:82] duration metric: took 400.533639ms for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.894142   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.089665   27131 request.go:632] Waited for 195.450073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:06:32.089735   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:06:32.089740   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.089747   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.089751   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.093190   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.289235   27131 request.go:632] Waited for 195.339549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:32.289293   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:32.289310   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.289317   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.289321   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.292485   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.293147   27131 pod_ready.go:93] pod "kube-proxy-zsbfs" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:32.293172   27131 pod_ready.go:82] duration metric: took 399.02399ms for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.293182   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.489243   27131 request.go:632] Waited for 195.995375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:06:32.489308   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:06:32.489316   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.489324   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.489327   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.493003   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.689901   27131 request.go:632] Waited for 196.356448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:32.689953   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:32.689958   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.689966   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.689970   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.693190   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.693742   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:32.693763   27131 pod_ready.go:82] duration metric: took 400.573652ms for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.693777   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.889556   27131 request.go:632] Waited for 195.689425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:06:32.889607   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:06:32.889612   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.889620   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.889624   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.893476   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.089475   27131 request.go:632] Waited for 195.357977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:33.089527   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:33.089532   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.089539   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.089543   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.092888   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.093460   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:33.093481   27131 pod_ready.go:82] duration metric: took 399.697128ms for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.093491   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.289500   27131 request.go:632] Waited for 195.942997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m03
	I1105 18:06:33.289569   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m03
	I1105 18:06:33.289576   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.289585   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.289589   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.293636   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:33.489851   27131 request.go:632] Waited for 195.367744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:33.489908   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:33.489913   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.489920   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.489924   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.493512   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.494235   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:33.494258   27131 pod_ready.go:82] duration metric: took 400.759685ms for pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.494276   27131 pod_ready.go:39] duration metric: took 5.201298893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
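
	[editor's aside] The wait summarized above polls each system-critical pod until its Ready condition is True. A roughly equivalent manual check against the same cluster, assuming the ha-844661 kubeconfig context from this run is still present, would be:

	    kubectl --context ha-844661 -n kube-system wait --for=condition=Ready pod -l component=kube-scheduler --timeout=6m
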
	I1105 18:06:33.494295   27131 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:06:33.494356   27131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:06:33.509380   27131 api_server.go:72] duration metric: took 23.955584698s to wait for apiserver process to appear ...
	I1105 18:06:33.509409   27131 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:06:33.509433   27131 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1105 18:06:33.514022   27131 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
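
	[editor's aside] The healthz probe above can be replayed by hand. /healthz is normally readable without credentials, so either of the following should return "ok", assuming the control-plane VM IP 192.168.39.48 from this run is still reachable:

	    curl -k https://192.168.39.48:8443/healthz
	    kubectl --context ha-844661 get --raw /healthz
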
	I1105 18:06:33.514097   27131 round_trippers.go:463] GET https://192.168.39.48:8443/version
	I1105 18:06:33.514107   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.514114   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.514119   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.514958   27131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 18:06:33.515041   27131 api_server.go:141] control plane version: v1.31.2
	I1105 18:06:33.515056   27131 api_server.go:131] duration metric: took 5.640397ms to wait for apiserver health ...
	I1105 18:06:33.515062   27131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:06:33.689459   27131 request.go:632] Waited for 174.322152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:33.689543   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:33.689554   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.689564   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.689570   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.695696   27131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:06:33.701785   27131 system_pods.go:59] 24 kube-system pods found
	I1105 18:06:33.701817   27131 system_pods.go:61] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:06:33.701822   27131 system_pods.go:61] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:06:33.701826   27131 system_pods.go:61] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:06:33.701829   27131 system_pods.go:61] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:06:33.701832   27131 system_pods.go:61] "etcd-ha-844661-m03" [c8179289-e67f-4a2b-bba3-1387aa107d3e] Running
	I1105 18:06:33.701836   27131 system_pods.go:61] "kindnet-fzrh6" [985ef0b3-91cc-4965-a1f3-a8e468eba2ee] Running
	I1105 18:06:33.701839   27131 system_pods.go:61] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:06:33.701842   27131 system_pods.go:61] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:06:33.701845   27131 system_pods.go:61] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:06:33.701849   27131 system_pods.go:61] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:06:33.701852   27131 system_pods.go:61] "kube-apiserver-ha-844661-m03" [57a94b5d-466e-4d87-ba16-ceba58d65ee0] Running
	I1105 18:06:33.701858   27131 system_pods.go:61] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:06:33.701864   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:06:33.701868   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m03" [dcadcdf5-6004-49a9-800b-f27798ab06db] Running
	I1105 18:06:33.701872   27131 system_pods.go:61] "kube-proxy-2mk9m" [483f248e-9776-4c11-a088-a2cbd152602b] Running
	I1105 18:06:33.701875   27131 system_pods.go:61] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:06:33.701879   27131 system_pods.go:61] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:06:33.701882   27131 system_pods.go:61] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:06:33.701886   27131 system_pods.go:61] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:06:33.701889   27131 system_pods.go:61] "kube-scheduler-ha-844661-m03" [711f353f-ee82-4066-98ff-e3349082bf32] Running
	I1105 18:06:33.701894   27131 system_pods.go:61] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:06:33.701897   27131 system_pods.go:61] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:06:33.701900   27131 system_pods.go:61] "kube-vip-ha-844661-m03" [5ebe3d8b-e1e2-4d10-bf5c-d88148144dd1] Running
	I1105 18:06:33.701903   27131 system_pods.go:61] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:06:33.701909   27131 system_pods.go:74] duration metric: took 186.841773ms to wait for pod list to return data ...
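
	[editor's aside] The same kube-system inventory can be listed directly with kubectl; a sketch assuming the ha-844661 context:

	    kubectl --context ha-844661 -n kube-system get pods -o wide
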
	I1105 18:06:33.701919   27131 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:06:33.889363   27131 request.go:632] Waited for 187.358199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:06:33.889435   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:06:33.889442   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.889452   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.889459   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.893683   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:33.893791   27131 default_sa.go:45] found service account: "default"
	I1105 18:06:33.893804   27131 default_sa.go:55] duration metric: took 191.879725ms for default service account to be created ...
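
	[editor's aside] The default service-account check can likewise be done by hand (same assumption about the context):

	    kubectl --context ha-844661 -n default get serviceaccount default
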
	I1105 18:06:33.893811   27131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:06:34.089215   27131 request.go:632] Waited for 195.345636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:34.089283   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:34.089291   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:34.089303   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:34.089323   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:34.096363   27131 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:06:34.102465   27131 system_pods.go:86] 24 kube-system pods found
	I1105 18:06:34.102491   27131 system_pods.go:89] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:06:34.102496   27131 system_pods.go:89] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:06:34.102501   27131 system_pods.go:89] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:06:34.102505   27131 system_pods.go:89] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:06:34.102508   27131 system_pods.go:89] "etcd-ha-844661-m03" [c8179289-e67f-4a2b-bba3-1387aa107d3e] Running
	I1105 18:06:34.102512   27131 system_pods.go:89] "kindnet-fzrh6" [985ef0b3-91cc-4965-a1f3-a8e468eba2ee] Running
	I1105 18:06:34.102515   27131 system_pods.go:89] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:06:34.102519   27131 system_pods.go:89] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:06:34.102522   27131 system_pods.go:89] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:06:34.102525   27131 system_pods.go:89] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:06:34.102529   27131 system_pods.go:89] "kube-apiserver-ha-844661-m03" [57a94b5d-466e-4d87-ba16-ceba58d65ee0] Running
	I1105 18:06:34.102533   27131 system_pods.go:89] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:06:34.102537   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:06:34.102541   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m03" [dcadcdf5-6004-49a9-800b-f27798ab06db] Running
	I1105 18:06:34.102545   27131 system_pods.go:89] "kube-proxy-2mk9m" [483f248e-9776-4c11-a088-a2cbd152602b] Running
	I1105 18:06:34.102551   27131 system_pods.go:89] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:06:34.102554   27131 system_pods.go:89] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:06:34.102557   27131 system_pods.go:89] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:06:34.102561   27131 system_pods.go:89] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:06:34.102564   27131 system_pods.go:89] "kube-scheduler-ha-844661-m03" [711f353f-ee82-4066-98ff-e3349082bf32] Running
	I1105 18:06:34.102569   27131 system_pods.go:89] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:06:34.102573   27131 system_pods.go:89] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:06:34.102578   27131 system_pods.go:89] "kube-vip-ha-844661-m03" [5ebe3d8b-e1e2-4d10-bf5c-d88148144dd1] Running
	I1105 18:06:34.102581   27131 system_pods.go:89] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:06:34.102586   27131 system_pods.go:126] duration metric: took 208.77013ms to wait for k8s-apps to be running ...
	I1105 18:06:34.102595   27131 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:06:34.102637   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:06:34.118557   27131 system_svc.go:56] duration metric: took 15.951864ms WaitForService to wait for kubelet
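
	[editor's aside] The kubelet check above runs systemctl inside the VM over SSH; a rough equivalent from the host, assuming the ha-844661 profile still exists, is:

	    minikube -p ha-844661 ssh "sudo systemctl is-active kubelet"
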
	I1105 18:06:34.118583   27131 kubeadm.go:582] duration metric: took 24.564791625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:06:34.118612   27131 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:06:34.288972   27131 request.go:632] Waited for 170.274451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I1105 18:06:34.289022   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes
	I1105 18:06:34.289035   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:34.289055   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:34.289062   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:34.292646   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:34.294249   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294283   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294309   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294316   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294322   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294327   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294335   27131 node_conditions.go:105] duration metric: took 175.714114ms to run NodePressure ...
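
	[editor's aside] The NodePressure pass reads each node's reported capacity (2 CPUs and 17734596Ki ephemeral storage per node in this run). A rough way to see the same figures, assuming the ha-844661 context:

	    kubectl --context ha-844661 describe nodes | grep -E '^Name:|cpu:|ephemeral-storage:'
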
	I1105 18:06:34.294352   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:06:34.294390   27131 start.go:255] writing updated cluster config ...
	I1105 18:06:34.294711   27131 ssh_runner.go:195] Run: rm -f paused
	I1105 18:06:34.347073   27131 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 18:06:34.348891   27131 out.go:177] * Done! kubectl is now configured to use "ha-844661" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.432688298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830213432664598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a2ca8a5-a2d0-4401-8363-1858d2aa52eb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.433124923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81fddc57-eda1-4cd1-bcd1-ec87fe868cd7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.433220403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81fddc57-eda1-4cd1-bcd1-ec87fe868cd7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.433455022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81fddc57-eda1-4cd1-bcd1-ec87fe868cd7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.469804256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0298169-eaa2-4e23-9ef4-0dbd1a9856f0 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.469888511Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0298169-eaa2-4e23-9ef4-0dbd1a9856f0 name=/runtime.v1.RuntimeService/Version
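
	[editor's aside] The Version, ImageFsInfo, and ListContainers requests in this CRI-O debug log arrive over the CRI API (here from periodic polling of the runtime). crictl exercises the same endpoints, so a rough manual equivalent inside the node, assuming minikube's default crio socket path, would be:

	    minikube -p ha-844661 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"
	    minikube -p ha-844661 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"
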
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.471377853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74fc9d6f-4324-4f7b-b713-69cc030609cc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.471905924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830213471880973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74fc9d6f-4324-4f7b-b713-69cc030609cc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.472459354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a48c26d-782e-4703-9c80-406f448d7077 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.472506987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a48c26d-782e-4703-9c80-406f448d7077 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.472709661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a48c26d-782e-4703-9c80-406f448d7077 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.513944870Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e046abf-21bb-4081-bde1-877d2ed8b0ba name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.514050301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e046abf-21bb-4081-bde1-877d2ed8b0ba name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.515439966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f30db42d-b27f-4057-ba6b-39c37ecc4efe name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.515885128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830213515861962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f30db42d-b27f-4057-ba6b-39c37ecc4efe name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.516424729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15435882-c391-44b6-81f0-609348e06ab4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.516480344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15435882-c391-44b6-81f0-609348e06ab4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.516704394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15435882-c391-44b6-81f0-609348e06ab4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.552868869Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fdcefb0-2f24-46d2-a6b7-d0caa5de22b4 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.552968434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fdcefb0-2f24-46d2-a6b7-d0caa5de22b4 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.554319226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33cd0398-a34b-46ff-9841-f0940a0e898e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.554760360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830213554736846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33cd0398-a34b-46ff-9841-f0940a0e898e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.555581485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bebbd44f-3aac-471e-8a6f-405a3f7da709 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.555635569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bebbd44f-3aac-471e-8a6f-405a3f7da709 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:13 ha-844661 crio[658]: time="2024-11-05 18:10:13.555925431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bebbd44f-3aac-471e-8a6f-405a3f7da709 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f547082b18e22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   27e18ae242703       busybox-7dff88458-lzhpc
	4504233c88e52       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   7b8c6b865e4b8       coredns-7c65d6cfc9-4bdfz
	2c9fc5d833b41       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   44bedf8a84dbf       coredns-7c65d6cfc9-s5g97
	258fd7ae93626       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b59a04159a4fb       storage-provisioner
	bf77486744a30       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   565a0867a4a3a       kindnet-vz22j
	1c753c07805a4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   a2589ca7aa1a5       kube-proxy-pjpkh
	9fc3970511492       ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f     6 minutes ago       Running             kube-vip                  0                   229c492a7d447       kube-vip-ha-844661
	f06b75f1a2501       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   da4d3442917c5       etcd-ha-844661
	695ba2636aaa9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   45ce87c5b9a86       kube-scheduler-ha-844661
	d6c4df0798539       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   c3cdeb3fb2bc9       kube-apiserver-ha-844661
	9fc529f9c17c8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   8cfef6eeee31d       kube-controller-manager-ha-844661
	
	
	==> coredns [2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a] <==
	[INFO] 10.244.3.2:48122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001817736s
	[INFO] 10.244.1.2:41485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154354s
	[INFO] 10.244.0.4:48696 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00417262s
	[INFO] 10.244.0.4:39724 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011241203s
	[INFO] 10.244.0.4:33801 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201157s
	[INFO] 10.244.3.2:59342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205557s
	[INFO] 10.244.3.2:38358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000335352s
	[INFO] 10.244.3.2:50220 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290051s
	[INFO] 10.244.1.2:42991 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002076706s
	[INFO] 10.244.1.2:38070 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182659s
	[INFO] 10.244.1.2:38061 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120824s
	[INFO] 10.244.0.4:55480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107684s
	[INFO] 10.244.3.2:54459 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094155s
	[INFO] 10.244.3.2:56770 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159318s
	[INFO] 10.244.1.2:46930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145588s
	[INFO] 10.244.1.2:51686 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000234893s
	[INFO] 10.244.1.2:43604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089852s
	[INFO] 10.244.0.4:59908 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00031712s
	[INFO] 10.244.3.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016445s
	[INFO] 10.244.3.2:35219 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306046s
	[INFO] 10.244.3.2:45286 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016761s
	[INFO] 10.244.1.2:48376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282486s
	[INFO] 10.244.1.2:44477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097938s
	[INFO] 10.244.1.2:51521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175252s
	[INFO] 10.244.1.2:42468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076611s
	
	
	==> coredns [4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8] <==
	[INFO] 10.244.0.4:38561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176278s
	[INFO] 10.244.0.4:47328 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000239279s
	[INFO] 10.244.0.4:37188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002005s
	[INFO] 10.244.0.4:40443 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116158s
	[INFO] 10.244.0.4:39770 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000216794s
	[INFO] 10.244.3.2:58499 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947267s
	[INFO] 10.244.3.2:50696 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001435907s
	[INFO] 10.244.3.2:53598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101366s
	[INFO] 10.244.3.2:40278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021319s
	[INFO] 10.244.3.2:35533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073855s
	[INFO] 10.244.1.2:57627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215883s
	[INFO] 10.244.1.2:58558 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015092s
	[INFO] 10.244.1.2:44310 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409552s
	[INFO] 10.244.1.2:44445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145932s
	[INFO] 10.244.1.2:53561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124269s
	[INFO] 10.244.0.4:42872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279983s
	[INFO] 10.244.0.4:56987 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127988s
	[INFO] 10.244.0.4:36230 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209676s
	[INFO] 10.244.3.2:59508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020584s
	[INFO] 10.244.3.2:54542 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160368s
	[INFO] 10.244.1.2:52317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136132s
	[INFO] 10.244.0.4:56988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179513s
	[INFO] 10.244.0.4:39632 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244979s
	[INFO] 10.244.0.4:60960 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110854s
	[INFO] 10.244.3.2:58476 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000304046s
	
	
	==> describe nodes <==
	Name:               ha-844661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T18_03_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:03:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-844661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee44951a983a4e549dbb04cb8a2493c9
	  System UUID:                ee44951a-983a-4e54-9dbb-04cb8a2493c9
	  Boot ID:                    4c65764c-54aa-465a-bc8a-8a5365b789a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzhpc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-4bdfz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-7c65d6cfc9-s5g97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-844661                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-vz22j                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-844661             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-844661    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-pjpkh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-844661             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-844661                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m10s  kube-proxy       
	  Normal  Starting                 6m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-844661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s  kubelet          Node ha-844661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s  kubelet          Node ha-844661 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	  Normal  NodeReady                5m54s  kubelet          Node ha-844661 status is now: NodeReady
	  Normal  RegisteredNode           5m11s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	  Normal  RegisteredNode           3m59s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	
	
	Name:               ha-844661-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_04_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    ha-844661-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75eddb8895b44c028e3869c19333df27
	  System UUID:                75eddb88-95b4-4c02-8e38-69c19333df27
	  Boot ID:                    703a3f97-42af-45ac-b300-e4714fc82ae4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vkchm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-844661-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-q898d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-844661-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-844661-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-zsbfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-844661-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-vip-ha-844661-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m19s                  cidrAllocator    Node ha-844661-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-844661-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-844661-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-844661-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-844661-m02 status is now: NodeNotReady
	
	
	Name:               ha-844661-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_06_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:06:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    ha-844661-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eaab072d40e24724bda026ac82fdd308
	  System UUID:                eaab072d-40e2-4724-bda0-26ac82fdd308
	  Boot ID:                    db511fc0-c5d5-4348-8360-c6fc1b44808f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mwvv2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-844661-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m5s
	  kube-system                 kindnet-fzrh6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-ha-844661-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-844661-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-2mk9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ha-844661-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-844661-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     4m7s                 cidrAllocator    Node ha-844661-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node ha-844661-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node ha-844661-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node ha-844661-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	
	
	Name:               ha-844661-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_07_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-844661-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9adceb878ab74645bb56707a0ab9854e
	  System UUID:                9adceb87-8ab7-4645-bb56-707a0ab9854e
	  Boot ID:                    0b1794d4-8e9f-4a02-ba93-5010c0d8fbf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7tcjz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-8bw6z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m55s            kube-proxy       
	  Normal  CIDRAssignmentFailed     3m               cidrAllocator    Node ha-844661-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     3m               cidrAllocator    Node ha-844661-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-844661-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-844661-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-844661-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s            node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  NodeReady                2m40s            kubelet          Node ha-844661-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 5 18:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051370] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036705] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826003] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.830792] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.518259] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.512732] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.062769] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057746] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.181267] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.115768] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.273995] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.824232] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.167137] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.060834] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.275907] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.079756] kauditd_printk_skb: 79 callbacks suppressed
	[Nov 5 18:04] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.402917] kauditd_printk_skb: 32 callbacks suppressed
	[Nov 5 18:05] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc] <==
	{"level":"warn","ts":"2024-11-05T18:10:13.626212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.726210Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.768772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.774704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.826370Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.831962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.840730Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.850017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.854849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.866094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.874804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.881559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.887377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.891853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.930601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.934716Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.941104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.948565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.956326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.961332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.965825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.971392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.985083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:13.994450Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:14.026713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:10:14 up 6 min,  0 users,  load average: 0.28, 0.42, 0.21
	Linux ha-844661 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf] <==
	I1105 18:09:38.981372       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:09:48.981067       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:09:48.981241       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:09:48.981518       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:09:48.981567       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:09:48.981718       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:09:48.981756       1 main.go:301] handling current node
	I1105 18:09:48.981786       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:09:48.981804       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:09:58.979695       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:09:58.979736       1 main.go:301] handling current node
	I1105 18:09:58.979751       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:09:58.979757       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:09:58.979941       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:09:58.979961       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:09:58.980047       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:09:58.980065       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:10:08.975320       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:10:08.975425       1 main.go:301] handling current node
	I1105 18:10:08.975448       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:10:08.975457       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:10:08.975728       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:10:08.975758       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:10:08.975910       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:10:08.975933       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f] <==
	W1105 18:03:56.787950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.48]
	I1105 18:03:56.789794       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:03:56.795759       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:03:56.988233       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 18:03:58.574343       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 18:03:58.589042       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1105 18:03:58.611994       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 18:04:02.140726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1105 18:04:02.242563       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1105 18:06:39.847316       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39688: use of closed network connection
	E1105 18:06:40.021738       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39706: use of closed network connection
	E1105 18:06:40.204127       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39716: use of closed network connection
	E1105 18:06:40.398615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39728: use of closed network connection
	E1105 18:06:40.573865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39736: use of closed network connection
	E1105 18:06:40.752398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39760: use of closed network connection
	E1105 18:06:40.936783       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39766: use of closed network connection
	E1105 18:06:41.111519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39780: use of closed network connection
	E1105 18:06:41.286054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39802: use of closed network connection
	E1105 18:06:41.573950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39826: use of closed network connection
	E1105 18:06:41.738524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39836: use of closed network connection
	E1105 18:06:41.904845       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39854: use of closed network connection
	E1105 18:06:42.073866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39862: use of closed network connection
	E1105 18:06:42.246567       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39868: use of closed network connection
	E1105 18:06:42.411961       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39894: use of closed network connection
	W1105 18:08:06.801135       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.48 192.168.39.52]
	
	
	==> kube-controller-manager [9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c] <==
	E1105 18:07:13.653435       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-844661-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-844661-m04"
	E1105 18:07:13.653555       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-844661-m04': failed to patch node CIDR: Node \"ha-844661-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1105 18:07:13.653638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:13.659637       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:13.797662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:14.149565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:14.559123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:16.780529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:16.780718       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-844661-m04"
	I1105 18:07:16.994375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:17.944364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:18.017747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:23.969145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:33.222978       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844661-m04"
	I1105 18:07:33.223667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:33.239449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:34.533989       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:44.277626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:08:29.557990       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844661-m04"
	I1105 18:08:29.558983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:29.585475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:29.697679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.853166ms"
	I1105 18:08:29.699962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.926µs"
	I1105 18:08:31.887524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:34.788426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	
	
	==> kube-proxy [1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:04:03.571824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:04:03.590655       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E1105 18:04:03.590765       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:04:03.621086       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:04:03.621144       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:04:03.621208       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:04:03.623505       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:04:03.623772       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:04:03.623783       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:04:03.625873       1 config.go:199] "Starting service config controller"
	I1105 18:04:03.625922       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:04:03.625956       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:04:03.625972       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:04:03.628076       1 config.go:328] "Starting node config controller"
	I1105 18:04:03.628108       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:04:03.726043       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:04:03.726043       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:04:03.728252       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab] <==
	E1105 18:03:56.072125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.276682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 18:03:56.276737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.329770       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 18:03:56.329820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.398642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:03:56.398687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1105 18:03:57.639067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 18:06:35.211549       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9e352dc6-ed87-4112-85c5-a76c00a8912f" pod="default/busybox-7dff88458-vkchm" assumedNode="ha-844661-m02" currentNode="ha-844661-m03"
	E1105 18:06:35.223911       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vkchm\": pod busybox-7dff88458-vkchm is already assigned to node \"ha-844661-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vkchm" node="ha-844661-m03"
	E1105 18:06:35.226313       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9e352dc6-ed87-4112-85c5-a76c00a8912f(default/busybox-7dff88458-vkchm) was assumed on ha-844661-m03 but assigned to ha-844661-m02" pod="default/busybox-7dff88458-vkchm"
	E1105 18:06:35.226429       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vkchm\": pod busybox-7dff88458-vkchm is already assigned to node \"ha-844661-m02\"" pod="default/busybox-7dff88458-vkchm"
	I1105 18:06:35.226528       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vkchm" node="ha-844661-m02"
	E1105 18:06:35.274759       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lzhpc\": pod busybox-7dff88458-lzhpc is already assigned to node \"ha-844661\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lzhpc" node="ha-844661"
	E1105 18:06:35.275967       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8687b103-4a1a-4529-9efd-46405325fb04(default/busybox-7dff88458-lzhpc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lzhpc"
	E1105 18:06:35.276226       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lzhpc\": pod busybox-7dff88458-lzhpc is already assigned to node \"ha-844661\"" pod="default/busybox-7dff88458-lzhpc"
	I1105 18:06:35.276363       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lzhpc" node="ha-844661"
	E1105 18:07:13.665747       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tfzng\": pod kube-proxy-tfzng is already assigned to node \"ha-844661-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tfzng" node="ha-844661-m04"
	E1105 18:07:13.665825       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f52b30f-7446-45ac-bb36-73398ffbfbc2(kube-system/kube-proxy-tfzng) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tfzng"
	E1105 18:07:13.665842       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tfzng\": pod kube-proxy-tfzng is already assigned to node \"ha-844661-m04\"" pod="kube-system/kube-proxy-tfzng"
	I1105 18:07:13.665872       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tfzng" node="ha-844661-m04"
	E1105 18:07:13.666212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vjq6v\": pod kindnet-vjq6v is already assigned to node \"ha-844661-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vjq6v" node="ha-844661-m04"
	E1105 18:07:13.666376       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d9f2bfec-eb1f-4373-bf3a-414ed6c8a630(kube-system/kindnet-vjq6v) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vjq6v"
	E1105 18:07:13.666420       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vjq6v\": pod kindnet-vjq6v is already assigned to node \"ha-844661-m04\"" pod="kube-system/kindnet-vjq6v"
	I1105 18:07:13.666453       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vjq6v" node="ha-844661-m04"
	
	
	==> kubelet <==
	Nov 05 18:08:58 ha-844661 kubelet[1296]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:08:58 ha-844661 kubelet[1296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:08:58 ha-844661 kubelet[1296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:08:58 ha-844661 kubelet[1296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:08:58 ha-844661 kubelet[1296]: E1105 18:08:58.595270    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830138594734384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:58 ha-844661 kubelet[1296]: E1105 18:08:58.595295    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830138594734384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:08 ha-844661 kubelet[1296]: E1105 18:09:08.597057    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830148596755320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:08 ha-844661 kubelet[1296]: E1105 18:09:08.597097    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830148596755320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:18 ha-844661 kubelet[1296]: E1105 18:09:18.599471    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830158599122023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:18 ha-844661 kubelet[1296]: E1105 18:09:18.599506    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830158599122023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:28 ha-844661 kubelet[1296]: E1105 18:09:28.601448    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830168600902243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:28 ha-844661 kubelet[1296]: E1105 18:09:28.601554    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830168600902243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:38 ha-844661 kubelet[1296]: E1105 18:09:38.606338    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830178605104359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:38 ha-844661 kubelet[1296]: E1105 18:09:38.606359    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830178605104359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:48 ha-844661 kubelet[1296]: E1105 18:09:48.608274    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830188607885225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:48 ha-844661 kubelet[1296]: E1105 18:09:48.608666    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830188607885225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.519242    1296 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:09:58 ha-844661 kubelet[1296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.611279    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830198610818845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.611302    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830198610818845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:08 ha-844661 kubelet[1296]: E1105 18:10:08.613551    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830208612853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:08 ha-844661 kubelet[1296]: E1105 18:10:08.613956    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830208612853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844661 -n ha-844661
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1105 18:10:15.279984   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.396811398s)
ha_test.go:415: expected profile "ha-844661" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844661\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-844661\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-844661\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.48\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.38\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.52\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.89\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\
"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844661 -n ha-844661
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 logs -n 25: (1.37567332s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m03_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m04 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp testdata/cp-test.txt                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m04_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03:/home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m03 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-844661 node stop m02 -v=7                                                     | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:03:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:03:20.652608   27131 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:03:20.652749   27131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:03:20.652760   27131 out.go:358] Setting ErrFile to fd 2...
	I1105 18:03:20.652767   27131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:03:20.652948   27131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:03:20.653500   27131 out.go:352] Setting JSON to false
	I1105 18:03:20.654349   27131 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2743,"bootTime":1730827058,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:03:20.654437   27131 start.go:139] virtualization: kvm guest
	I1105 18:03:20.656534   27131 out.go:177] * [ha-844661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:03:20.657972   27131 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:03:20.658005   27131 notify.go:220] Checking for updates...
	I1105 18:03:20.660463   27131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:03:20.661864   27131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:03:20.663111   27131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:20.664367   27131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:03:20.665603   27131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:03:20.666934   27131 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:03:20.701089   27131 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 18:03:20.702358   27131 start.go:297] selected driver: kvm2
	I1105 18:03:20.702375   27131 start.go:901] validating driver "kvm2" against <nil>
	I1105 18:03:20.702385   27131 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:03:20.703116   27131 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:03:20.703189   27131 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:03:20.718290   27131 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:03:20.718330   27131 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 18:03:20.718556   27131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:03:20.718584   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:20.718622   27131 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1105 18:03:20.718632   27131 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 18:03:20.718676   27131 start.go:340] cluster config:
	{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1105 18:03:20.718795   27131 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:03:20.720599   27131 out.go:177] * Starting "ha-844661" primary control-plane node in "ha-844661" cluster
	I1105 18:03:20.721815   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:03:20.721849   27131 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:03:20.721872   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:03:20.721982   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:03:20.721996   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:03:20.722409   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:03:20.722435   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json: {Name:mkaefcdd76905e10868a2bf21132faf3026da59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:20.722574   27131 start.go:360] acquireMachinesLock for ha-844661: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:03:20.722613   27131 start.go:364] duration metric: took 21.652µs to acquireMachinesLock for "ha-844661"
	I1105 18:03:20.722627   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:03:20.722690   27131 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 18:03:20.724172   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:03:20.724279   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:03:20.724320   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:03:20.738289   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I1105 18:03:20.738756   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:03:20.739283   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:03:20.739302   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:03:20.739702   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:03:20.739881   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:20.740007   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:20.740175   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:03:20.740205   27131 client.go:168] LocalClient.Create starting
	I1105 18:03:20.740238   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:03:20.740272   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:03:20.740288   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:03:20.740341   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:03:20.740359   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:03:20.740374   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:03:20.740388   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:03:20.740400   27131 main.go:141] libmachine: (ha-844661) Calling .PreCreateCheck
	I1105 18:03:20.740713   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:20.741068   27131 main.go:141] libmachine: Creating machine...
	I1105 18:03:20.741080   27131 main.go:141] libmachine: (ha-844661) Calling .Create
	I1105 18:03:20.741210   27131 main.go:141] libmachine: (ha-844661) Creating KVM machine...
	I1105 18:03:20.742313   27131 main.go:141] libmachine: (ha-844661) DBG | found existing default KVM network
	I1105 18:03:20.742933   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:20.742806   27154 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1105 18:03:20.742963   27131 main.go:141] libmachine: (ha-844661) DBG | created network xml: 
	I1105 18:03:20.742994   27131 main.go:141] libmachine: (ha-844661) DBG | <network>
	I1105 18:03:20.743008   27131 main.go:141] libmachine: (ha-844661) DBG |   <name>mk-ha-844661</name>
	I1105 18:03:20.743015   27131 main.go:141] libmachine: (ha-844661) DBG |   <dns enable='no'/>
	I1105 18:03:20.743024   27131 main.go:141] libmachine: (ha-844661) DBG |   
	I1105 18:03:20.743029   27131 main.go:141] libmachine: (ha-844661) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1105 18:03:20.743036   27131 main.go:141] libmachine: (ha-844661) DBG |     <dhcp>
	I1105 18:03:20.743041   27131 main.go:141] libmachine: (ha-844661) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1105 18:03:20.743049   27131 main.go:141] libmachine: (ha-844661) DBG |     </dhcp>
	I1105 18:03:20.743053   27131 main.go:141] libmachine: (ha-844661) DBG |   </ip>
	I1105 18:03:20.743060   27131 main.go:141] libmachine: (ha-844661) DBG |   
	I1105 18:03:20.743066   27131 main.go:141] libmachine: (ha-844661) DBG | </network>
	I1105 18:03:20.743074   27131 main.go:141] libmachine: (ha-844661) DBG | 
	I1105 18:03:20.748364   27131 main.go:141] libmachine: (ha-844661) DBG | trying to create private KVM network mk-ha-844661 192.168.39.0/24...
	I1105 18:03:20.811114   27131 main.go:141] libmachine: (ha-844661) DBG | private KVM network mk-ha-844661 192.168.39.0/24 created
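[Editor's note] For context on this step: the driver picks a free private /24 (192.168.39.0/24 here) and defines a dedicated libvirt network from the XML printed above. The following is a minimal, hypothetical Go sketch of the same idea, not minikube's actual code; it shells out to virsh instead of using the libvirt API, and the function name networkXML is invented.

// Hypothetical sketch: build a libvirt network definition like the
// mk-ha-844661 XML above and define/start it with virsh.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func networkXML(name, gateway, dhcpStart, dhcpEnd string) string {
	return fmt.Sprintf(`<network>
  <name>%s</name>
  <dns enable='no'/>
  <ip address='%s' netmask='255.255.255.0'>
    <dhcp>
      <range start='%s' end='%s'/>
    </dhcp>
  </ip>
</network>
`, name, gateway, dhcpStart, dhcpEnd)
}

func main() {
	xml := networkXML("mk-ha-844661", "192.168.39.1", "192.168.39.2", "192.168.39.253")

	// Write the XML to a temp file and hand it to virsh; the real driver
	// talks to libvirt directly over its API instead.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	f.WriteString(xml)
	f.Close()

	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-autostart", "mk-ha-844661"},
		{"net-start", "mk-ha-844661"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s (err=%v)\n", args, out, err)
	}
}
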
	I1105 18:03:20.811141   27131 main.go:141] libmachine: (ha-844661) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 ...
	I1105 18:03:20.811159   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:20.811087   27154 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:20.811177   27131 main.go:141] libmachine: (ha-844661) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:03:20.811237   27131 main.go:141] libmachine: (ha-844661) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:03:21.057798   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.057650   27154 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa...
	I1105 18:03:21.226724   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.226590   27154 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/ha-844661.rawdisk...
	I1105 18:03:21.226750   27131 main.go:141] libmachine: (ha-844661) DBG | Writing magic tar header
	I1105 18:03:21.226760   27131 main.go:141] libmachine: (ha-844661) DBG | Writing SSH key tar header
	I1105 18:03:21.226768   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.226707   27154 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 ...
	I1105 18:03:21.226781   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661
	I1105 18:03:21.226859   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 (perms=drwx------)
	I1105 18:03:21.226880   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:03:21.226887   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:03:21.226897   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:21.226904   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:03:21.226909   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:03:21.226916   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:03:21.226920   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:03:21.226927   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home
	I1105 18:03:21.226932   27131 main.go:141] libmachine: (ha-844661) DBG | Skipping /home - not owner
	I1105 18:03:21.226941   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:03:21.226950   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:03:21.226957   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
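[Editor's note] As the log suggests, this step creates the machine directory, an SSH key, and a raw disk image whose head carries a small tar stream holding the key (the "Writing magic tar header" / "Writing SSH key tar header" lines), then tightens directory permissions. Below is a rough, assumption-laden sketch of the disk and permissions part only; createRawDisk is a made-up helper, the size and paths are illustrative, and the claim that the guest extracts the tar on first boot is inferred from the log rather than confirmed.

// Illustrative sketch only: create a sparse raw disk that starts with a
// tar archive containing the SSH public key, then fix directory perms.
package main

import (
	"archive/tar"
	"os"
	"path/filepath"
)

func createRawDisk(machineDir string, sizeMB int64, pubKey []byte) error {
	disk := filepath.Join(machineDir, "ha-844661.rawdisk")
	f, err := os.Create(disk)
	if err != nil {
		return err
	}
	defer f.Close()

	// Write a tiny tar stream at the start of the disk; the guest image
	// reportedly picks it up on first boot to install the SSH key.
	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}

	// Extend to the full size without allocating blocks (sparse file).
	if err := f.Truncate(sizeMB * 1024 * 1024); err != nil {
		return err
	}

	// Match the permission fixing seen in the log (drwx------ on the
	// machine dir, drwxr-xr-x further up the tree).
	return os.Chmod(machineDir, 0700)
}

func main() {
	dir := "/tmp/ha-844661-demo"
	os.MkdirAll(dir, 0700)
	if err := createRawDisk(dir, 64, []byte("ssh-rsa AAAA... demo-key\n")); err != nil {
		panic(err)
	}
}
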
	I1105 18:03:21.226962   27131 main.go:141] libmachine: (ha-844661) Creating domain...
	I1105 18:03:21.228177   27131 main.go:141] libmachine: (ha-844661) define libvirt domain using xml: 
	I1105 18:03:21.228198   27131 main.go:141] libmachine: (ha-844661) <domain type='kvm'>
	I1105 18:03:21.228204   27131 main.go:141] libmachine: (ha-844661)   <name>ha-844661</name>
	I1105 18:03:21.228209   27131 main.go:141] libmachine: (ha-844661)   <memory unit='MiB'>2200</memory>
	I1105 18:03:21.228214   27131 main.go:141] libmachine: (ha-844661)   <vcpu>2</vcpu>
	I1105 18:03:21.228218   27131 main.go:141] libmachine: (ha-844661)   <features>
	I1105 18:03:21.228223   27131 main.go:141] libmachine: (ha-844661)     <acpi/>
	I1105 18:03:21.228228   27131 main.go:141] libmachine: (ha-844661)     <apic/>
	I1105 18:03:21.228233   27131 main.go:141] libmachine: (ha-844661)     <pae/>
	I1105 18:03:21.228241   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228249   27131 main.go:141] libmachine: (ha-844661)   </features>
	I1105 18:03:21.228254   27131 main.go:141] libmachine: (ha-844661)   <cpu mode='host-passthrough'>
	I1105 18:03:21.228261   27131 main.go:141] libmachine: (ha-844661)   
	I1105 18:03:21.228268   27131 main.go:141] libmachine: (ha-844661)   </cpu>
	I1105 18:03:21.228298   27131 main.go:141] libmachine: (ha-844661)   <os>
	I1105 18:03:21.228318   27131 main.go:141] libmachine: (ha-844661)     <type>hvm</type>
	I1105 18:03:21.228325   27131 main.go:141] libmachine: (ha-844661)     <boot dev='cdrom'/>
	I1105 18:03:21.228329   27131 main.go:141] libmachine: (ha-844661)     <boot dev='hd'/>
	I1105 18:03:21.228355   27131 main.go:141] libmachine: (ha-844661)     <bootmenu enable='no'/>
	I1105 18:03:21.228375   27131 main.go:141] libmachine: (ha-844661)   </os>
	I1105 18:03:21.228385   27131 main.go:141] libmachine: (ha-844661)   <devices>
	I1105 18:03:21.228403   27131 main.go:141] libmachine: (ha-844661)     <disk type='file' device='cdrom'>
	I1105 18:03:21.228418   27131 main.go:141] libmachine: (ha-844661)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/boot2docker.iso'/>
	I1105 18:03:21.228429   27131 main.go:141] libmachine: (ha-844661)       <target dev='hdc' bus='scsi'/>
	I1105 18:03:21.228437   27131 main.go:141] libmachine: (ha-844661)       <readonly/>
	I1105 18:03:21.228450   27131 main.go:141] libmachine: (ha-844661)     </disk>
	I1105 18:03:21.228462   27131 main.go:141] libmachine: (ha-844661)     <disk type='file' device='disk'>
	I1105 18:03:21.228474   27131 main.go:141] libmachine: (ha-844661)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:03:21.228488   27131 main.go:141] libmachine: (ha-844661)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/ha-844661.rawdisk'/>
	I1105 18:03:21.228497   27131 main.go:141] libmachine: (ha-844661)       <target dev='hda' bus='virtio'/>
	I1105 18:03:21.228502   27131 main.go:141] libmachine: (ha-844661)     </disk>
	I1105 18:03:21.228511   27131 main.go:141] libmachine: (ha-844661)     <interface type='network'>
	I1105 18:03:21.228519   27131 main.go:141] libmachine: (ha-844661)       <source network='mk-ha-844661'/>
	I1105 18:03:21.228532   27131 main.go:141] libmachine: (ha-844661)       <model type='virtio'/>
	I1105 18:03:21.228539   27131 main.go:141] libmachine: (ha-844661)     </interface>
	I1105 18:03:21.228551   27131 main.go:141] libmachine: (ha-844661)     <interface type='network'>
	I1105 18:03:21.228560   27131 main.go:141] libmachine: (ha-844661)       <source network='default'/>
	I1105 18:03:21.228570   27131 main.go:141] libmachine: (ha-844661)       <model type='virtio'/>
	I1105 18:03:21.228579   27131 main.go:141] libmachine: (ha-844661)     </interface>
	I1105 18:03:21.228587   27131 main.go:141] libmachine: (ha-844661)     <serial type='pty'>
	I1105 18:03:21.228599   27131 main.go:141] libmachine: (ha-844661)       <target port='0'/>
	I1105 18:03:21.228607   27131 main.go:141] libmachine: (ha-844661)     </serial>
	I1105 18:03:21.228613   27131 main.go:141] libmachine: (ha-844661)     <console type='pty'>
	I1105 18:03:21.228629   27131 main.go:141] libmachine: (ha-844661)       <target type='serial' port='0'/>
	I1105 18:03:21.228642   27131 main.go:141] libmachine: (ha-844661)     </console>
	I1105 18:03:21.228653   27131 main.go:141] libmachine: (ha-844661)     <rng model='virtio'>
	I1105 18:03:21.228670   27131 main.go:141] libmachine: (ha-844661)       <backend model='random'>/dev/random</backend>
	I1105 18:03:21.228679   27131 main.go:141] libmachine: (ha-844661)     </rng>
	I1105 18:03:21.228687   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228694   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228699   27131 main.go:141] libmachine: (ha-844661)   </devices>
	I1105 18:03:21.228707   27131 main.go:141] libmachine: (ha-844661) </domain>
	I1105 18:03:21.228717   27131 main.go:141] libmachine: (ha-844661) 
	I1105 18:03:21.232718   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:b2:92:26 in network default
	I1105 18:03:21.233193   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:21.233215   27131 main.go:141] libmachine: (ha-844661) Ensuring networks are active...
	I1105 18:03:21.233765   27131 main.go:141] libmachine: (ha-844661) Ensuring network default is active
	I1105 18:03:21.234017   27131 main.go:141] libmachine: (ha-844661) Ensuring network mk-ha-844661 is active
	I1105 18:03:21.234455   27131 main.go:141] libmachine: (ha-844661) Getting domain xml...
	I1105 18:03:21.235089   27131 main.go:141] libmachine: (ha-844661) Creating domain...
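[Editor's note] The domain XML above is registered with libvirt and the VM is then booted; the two <interface> entries put the guest on both the shared 'default' NAT network and the per-profile 'mk-ha-844661' network, which is why two MAC addresses appear below. A virsh-level approximation of the define / ensure-networks / start sequence (the real driver uses the libvirt API directly, and the XML path here is illustrative):

// Approximation only: the kvm2 driver talks to libvirt directly; this
// shells out to virsh to show the equivalent sequence of steps.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
}

func main() {
	// "define libvirt domain using xml" -> virsh define
	run("virsh", "define", "/tmp/ha-844661-domain.xml") // path is illustrative

	// "Ensuring networks are active..."
	run("virsh", "net-start", "default")      // may already be active
	run("virsh", "net-start", "mk-ha-844661") // created earlier in the log

	// The second "Creating domain..." corresponds to actually booting the VM.
	run("virsh", "start", "ha-844661")
}
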
	I1105 18:03:22.412574   27131 main.go:141] libmachine: (ha-844661) Waiting to get IP...
	I1105 18:03:22.413266   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:22.413608   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:22.413630   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:22.413577   27154 retry.go:31] will retry after 279.954438ms: waiting for machine to come up
	I1105 18:03:22.695059   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:22.695483   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:22.695511   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:22.695451   27154 retry.go:31] will retry after 304.898477ms: waiting for machine to come up
	I1105 18:03:23.001972   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.002322   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.002343   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.002303   27154 retry.go:31] will retry after 443.493793ms: waiting for machine to come up
	I1105 18:03:23.446683   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.447042   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.447069   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.446999   27154 retry.go:31] will retry after 509.391538ms: waiting for machine to come up
	I1105 18:03:23.957539   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.957900   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.957927   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.957847   27154 retry.go:31] will retry after 602.880889ms: waiting for machine to come up
	I1105 18:03:24.562659   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:24.563119   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:24.563144   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:24.563076   27154 retry.go:31] will retry after 741.734368ms: waiting for machine to come up
	I1105 18:03:25.306116   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:25.306633   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:25.306663   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:25.306587   27154 retry.go:31] will retry after 1.015957471s: waiting for machine to come up
	I1105 18:03:26.324342   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:26.324731   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:26.324755   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:26.324683   27154 retry.go:31] will retry after 1.378698886s: waiting for machine to come up
	I1105 18:03:27.705172   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:27.705551   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:27.705575   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:27.705506   27154 retry.go:31] will retry after 1.576136067s: waiting for machine to come up
	I1105 18:03:29.283960   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:29.284380   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:29.284417   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:29.284337   27154 retry.go:31] will retry after 2.253581174s: waiting for machine to come up
	I1105 18:03:31.539436   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:31.539830   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:31.539860   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:31.539773   27154 retry.go:31] will retry after 1.761371484s: waiting for machine to come up
	I1105 18:03:33.303719   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:33.304166   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:33.304190   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:33.304128   27154 retry.go:31] will retry after 2.85080226s: waiting for machine to come up
	I1105 18:03:36.156486   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:36.156898   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:36.156920   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:36.156851   27154 retry.go:31] will retry after 4.320693691s: waiting for machine to come up
	I1105 18:03:40.482276   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.482645   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has current primary IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.482666   27131 main.go:141] libmachine: (ha-844661) Found IP for machine: 192.168.39.48
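[Editor's note] The "Waiting to get IP" loop above retries with a growing backoff until the new MAC shows up in the network's DHCP leases (roughly 20 seconds in this run). Below is a simplified stand-alone version of that wait; leaseIP is an invented helper, and it polls virsh output instead of using the libvirt API as the real retry.go loop does.

// Simplified sketch of the wait-for-IP loop: poll the DHCP leases of the
// private network until the domain's MAC address has an address.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func leaseIP(network, mac string) (string, error) {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, mac) {
			for _, field := range strings.Fields(line) {
				if strings.Contains(field, "/") { // e.g. 192.168.39.48/24
					return strings.SplitN(field, "/", 2)[0], nil
				}
			}
		}
	}
	return "", fmt.Errorf("no lease for %s yet", mac)
}

func main() {
	delay := 300 * time.Millisecond
	for i := 0; i < 20; i++ {
		ip, err := leaseIP("mk-ha-844661", "52:54:00:ba:57:dd")
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		fmt.Printf("retry %d: %v; will retry after %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // rough stand-in for the jittered backoff in retry.go
	}
}
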
	I1105 18:03:40.482731   27131 main.go:141] libmachine: (ha-844661) Reserving static IP address...
	I1105 18:03:40.483186   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find host DHCP lease matching {name: "ha-844661", mac: "52:54:00:ba:57:dd", ip: "192.168.39.48"} in network mk-ha-844661
	I1105 18:03:40.553039   27131 main.go:141] libmachine: (ha-844661) DBG | Getting to WaitForSSH function...
	I1105 18:03:40.553065   27131 main.go:141] libmachine: (ha-844661) Reserved static IP address: 192.168.39.48
	I1105 18:03:40.553074   27131 main.go:141] libmachine: (ha-844661) Waiting for SSH to be available...
	I1105 18:03:40.555541   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.555889   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.555921   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.556076   27131 main.go:141] libmachine: (ha-844661) DBG | Using SSH client type: external
	I1105 18:03:40.556099   27131 main.go:141] libmachine: (ha-844661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa (-rw-------)
	I1105 18:03:40.556130   27131 main.go:141] libmachine: (ha-844661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:03:40.556164   27131 main.go:141] libmachine: (ha-844661) DBG | About to run SSH command:
	I1105 18:03:40.556196   27131 main.go:141] libmachine: (ha-844661) DBG | exit 0
	I1105 18:03:40.678881   27131 main.go:141] libmachine: (ha-844661) DBG | SSH cmd err, output: <nil>: 
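[Editor's note] WaitForSSH simply runs the external ssh command printed above ("exit 0") until it returns success, which is how the driver knows sshd is reachable. A minimal reproduction of that probe, reusing the options from the log; sshReady is an invented helper name.

// Minimal sketch of the external-SSH probe: run `ssh ... exit 0` until it
// succeeds. The options mirror the command line printed in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	ip := "192.168.39.48"
	key := "/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa"
	for !sshReady(ip, key) {
		fmt.Println("ssh not ready yet, retrying...")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("ssh is available")
}
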
	I1105 18:03:40.679168   27131 main.go:141] libmachine: (ha-844661) KVM machine creation complete!
	I1105 18:03:40.679431   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:40.680021   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:40.680197   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:40.680362   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:03:40.680377   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:03:40.681549   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:03:40.681565   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:03:40.681581   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:03:40.681589   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.683878   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.684197   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.684222   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.684354   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.684522   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.684666   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.684789   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.684936   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.685164   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.685176   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:03:40.782106   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:03:40.782126   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:03:40.782134   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.785142   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.785540   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.785569   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.785664   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.785868   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.786031   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.786159   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.786354   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.786515   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.786526   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:03:40.883619   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:03:40.883676   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:03:40.883682   27131 main.go:141] libmachine: Provisioning with buildroot...
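[Editor's note] Provisioner detection is just "cat /etc/os-release" over SSH followed by matching the NAME/ID fields (Buildroot here, hence "found compatible host: buildroot"). A small local parser doing the same field extraction; parseOSRelease is an invented name.

// Small sketch: parse /etc/os-release the way the provisioner-detection
// step does, pulling out NAME, ID and VERSION_ID.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		panic(err)
	}
	// On the minikube guest above this prints: Buildroot buildroot 2023.02.9
	fmt.Println(info["NAME"], info["ID"], info["VERSION_ID"])
}
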
	I1105 18:03:40.883690   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:40.883923   27131 buildroot.go:166] provisioning hostname "ha-844661"
	I1105 18:03:40.883949   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:40.884120   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.886507   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.886833   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.886857   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.886980   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.887151   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.887291   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.887396   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.887549   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.887741   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.887756   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661 && echo "ha-844661" | sudo tee /etc/hostname
	I1105 18:03:41.000392   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661
	
	I1105 18:03:41.000420   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.003294   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.003567   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.003608   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.003744   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.003933   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.004103   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.004242   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.004353   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.004531   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.004545   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:03:41.111348   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:03:41.111383   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:03:41.111432   27131 buildroot.go:174] setting up certificates
	I1105 18:03:41.111449   27131 provision.go:84] configureAuth start
	I1105 18:03:41.111460   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:41.111736   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.114450   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.114812   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.114841   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.114944   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.117124   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.117436   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.117462   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.117573   27131 provision.go:143] copyHostCerts
	I1105 18:03:41.117613   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:03:41.117655   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:03:41.117671   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:03:41.117771   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:03:41.117875   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:03:41.117903   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:03:41.117913   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:03:41.117953   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:03:41.118004   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:03:41.118021   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:03:41.118027   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:03:41.118050   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:03:41.118095   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661 san=[127.0.0.1 192.168.39.48 ha-844661 localhost minikube]
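[Editor's note] configureAuth generates a per-machine server certificate signed by the local minikube CA, with the SANs listed above (127.0.0.1, the VM IP, the hostname, localhost, minikube). A compact standard-library sketch of issuing such a SAN-bearing certificate; to stay self-contained it creates a throwaway CA in memory rather than reading ca.pem/ca-key.pem, so it is not the actual minikube code path.

// Sketch of server-cert generation with the SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-844661"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above.
		DNSNames:    []string{"ha-844661", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.48")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Println("server cert issued for ha-844661")
}
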
	I1105 18:03:41.208702   27131 provision.go:177] copyRemoteCerts
	I1105 18:03:41.208760   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:03:41.208783   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.211467   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.211827   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.211850   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.212052   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.212204   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.212341   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.212443   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.296812   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:03:41.296897   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:03:41.319712   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:03:41.319772   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:03:41.342415   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:03:41.342483   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1105 18:03:41.365050   27131 provision.go:87] duration metric: took 253.585291ms to configureAuth
	I1105 18:03:41.365082   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:03:41.365296   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:03:41.365378   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.368515   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.368840   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.368869   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.369025   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.369189   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.369363   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.369489   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.369646   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.369808   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.369821   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:03:41.576635   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:03:41.576666   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:03:41.576676   27131 main.go:141] libmachine: (ha-844661) Calling .GetURL
	I1105 18:03:41.577929   27131 main.go:141] libmachine: (ha-844661) DBG | Using libvirt version 6000000
	I1105 18:03:41.580297   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.580615   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.580654   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.580760   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:03:41.580772   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:03:41.580778   27131 client.go:171] duration metric: took 20.840565211s to LocalClient.Create
	I1105 18:03:41.580795   27131 start.go:167] duration metric: took 20.84062429s to libmachine.API.Create "ha-844661"
	I1105 18:03:41.580805   27131 start.go:293] postStartSetup for "ha-844661" (driver="kvm2")
	I1105 18:03:41.580814   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:03:41.580829   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.581046   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:03:41.581068   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.583124   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.583501   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.583522   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.583601   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.583779   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.583943   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.584110   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.661161   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:03:41.665033   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:03:41.665062   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:03:41.665127   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:03:41.665231   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:03:41.665252   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:03:41.665373   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:03:41.674466   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:03:41.696494   27131 start.go:296] duration metric: took 115.67878ms for postStartSetup
	I1105 18:03:41.696542   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:41.697138   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.699655   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.699984   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.700009   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.700292   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:03:41.700505   27131 start.go:128] duration metric: took 20.977803727s to createHost
	I1105 18:03:41.700531   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.702386   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.702601   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.702627   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.702711   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.702863   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.703005   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.703106   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.703251   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.703451   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.703464   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:03:41.803411   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829821.777547713
	
	I1105 18:03:41.803432   27131 fix.go:216] guest clock: 1730829821.777547713
	I1105 18:03:41.803441   27131 fix.go:229] Guest: 2024-11-05 18:03:41.777547713 +0000 UTC Remote: 2024-11-05 18:03:41.700519186 +0000 UTC m=+21.085212205 (delta=77.028527ms)
	I1105 18:03:41.803466   27131 fix.go:200] guest clock delta is within tolerance: 77.028527ms
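[Editor's note] The guest-clock check runs `date +%s.%N` in the VM and compares the result with the host's notion of "now"; only if the delta exceeded a tolerance would minikube adjust the guest clock. A toy version of the comparison using the values from this run; guestClockDelta is an invented helper and the 2s tolerance is an assumption, not the real constant.

// Toy version of the guest-clock comparison: parse `date +%s.%N` output
// and compute the delta against the host-side timestamp.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64((secs-math.Floor(secs))*1e9))
	return local.Sub(guest), nil
}

func main() {
	// Values taken from the log above.
	delta, err := guestClockDelta("1730829821.777547713",
		time.Date(2024, 11, 5, 18, 3, 41, 700519186, time.UTC))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed, for illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() < tolerance)
}
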
	I1105 18:03:41.803472   27131 start.go:83] releasing machines lock for "ha-844661", held for 21.080851922s
	I1105 18:03:41.803504   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.803818   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.806212   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.806544   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.806574   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.806731   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807182   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807323   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807421   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:03:41.807458   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.807478   27131 ssh_runner.go:195] Run: cat /version.json
	I1105 18:03:41.807503   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.809937   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810070   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810265   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.810291   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810383   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.810476   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.810506   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810517   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.810650   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.810655   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.810815   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.810809   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.810922   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.811058   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.883551   27131 ssh_runner.go:195] Run: systemctl --version
	I1105 18:03:41.923044   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:03:42.072766   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:03:42.079007   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:03:42.079076   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:03:42.094820   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:03:42.094844   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:03:42.094917   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:03:42.118583   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:03:42.138115   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:03:42.138172   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:03:42.152440   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:03:42.166344   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:03:42.279937   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:03:42.434792   27131 docker.go:233] disabling docker service ...
	I1105 18:03:42.434953   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:03:42.449109   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:03:42.461551   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:03:42.578145   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:03:42.699091   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:03:42.712758   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:03:42.730751   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:03:42.730837   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.741264   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:03:42.741334   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.751371   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.761461   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.771733   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:03:42.782235   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.792151   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.809625   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.820631   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:03:42.829567   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:03:42.829657   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:03:42.841074   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:03:42.849804   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:03:42.970294   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
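[Editor's note] The CRI-O preparation above is a fixed sequence of small edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl), plus enabling br_netfilter and IP forwarding, before restarting the service. The same remote commands from the log are collected here in one place for readability; runSSH is a hypothetical stand-in for minikube's ssh_runner and only prints.

// The shell steps from the log above, gathered into one ordered list.
package main

import "fmt"

func runSSH(cmd string) { fmt.Println("ssh>", cmd) }

func main() {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo rm -rf /etc/cni/net.mk`,
		`sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		runSSH(s)
	}
}
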
	I1105 18:03:43.072129   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:03:43.072202   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:03:43.076505   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:03:43.076553   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:03:43.079876   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:03:43.118292   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:03:43.118368   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:03:43.145365   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:03:43.174475   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:03:43.175688   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:43.178118   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:43.178392   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:43.178429   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:43.178616   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:03:43.182299   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:03:43.194156   27131 kubeadm.go:883] updating cluster {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:03:43.194286   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:03:43.194326   27131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:03:43.224139   27131 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 18:03:43.224200   27131 ssh_runner.go:195] Run: which lz4
	I1105 18:03:43.227717   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1105 18:03:43.227803   27131 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:03:43.231367   27131 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:03:43.231394   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 18:03:44.421241   27131 crio.go:462] duration metric: took 1.193460189s to copy over tarball
	I1105 18:03:44.421309   27131 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:03:46.448289   27131 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.026951778s)
	I1105 18:03:46.448321   27131 crio.go:469] duration metric: took 2.027054899s to extract the tarball
	I1105 18:03:46.448331   27131 ssh_runner.go:146] rm: /preloaded.tar.lz4
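(Editor's note: for reference, the preload handling above boils down to the following steps on the guest; this is a sketch of the commands the runner executes, using the paths exactly as logged.)

    # First start: /preloaded.tar.lz4 does not exist yet, so the tarball is copied over (scp in the log).
    stat -c "%s %y" /preloaded.tar.lz4 || echo "not present; copying preload tarball"
    # Unpack the preloaded images into /var so cri-o can see them, then clean up.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
    # The image store should now report the preloaded images.
    sudo crictl images --output json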
	I1105 18:03:46.484203   27131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:03:46.526703   27131 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:03:46.526728   27131 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:03:46.526737   27131 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.2 crio true true} ...
	I1105 18:03:46.526839   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:03:46.526923   27131 ssh_runner.go:195] Run: crio config
	I1105 18:03:46.568508   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:46.568526   27131 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 18:03:46.568535   27131 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:03:46.568555   27131 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844661 NodeName:ha-844661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:03:46.568670   27131 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
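(Editor's note: this generated config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to kubeadm.yaml before kubeadm init consumes it. If one wanted to sanity-check it by hand, a sketch, assuming kubeadm's "config validate" subcommand is available in this kubeadm release and using the binary path seen later in this log:)

    # Illustrative only: validate the generated kubeadm config against its API schema.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new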
	
	I1105 18:03:46.568726   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:03:46.568770   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:03:46.584044   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:03:46.584179   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
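(Editor's note: once this manifest lands in /etc/kubernetes/manifests, scp'd a few lines below, and the kubelet is running, kube-vip should elect a leader via the plndr-cp-lock lease and bind the HA virtual IP. A quick manual check, illustrative only, on whichever control-plane VM currently holds the lease:)

    # The VIP from the config above should appear as a secondary address on eth0.
    ip addr show eth0 | grep 192.168.39.254
    # The static pod manifest that was written for it:
    cat /etc/kubernetes/manifests/kube-vip.yaml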
	I1105 18:03:46.584237   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:03:46.593564   27131 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:03:46.593616   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 18:03:46.602413   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1105 18:03:46.618161   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:03:46.634586   27131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1105 18:03:46.650181   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1105 18:03:46.665377   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:03:46.668925   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:03:46.679986   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:03:46.788039   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:03:46.803466   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.48
	I1105 18:03:46.803487   27131 certs.go:194] generating shared ca certs ...
	I1105 18:03:46.803503   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.803661   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:03:46.803717   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:03:46.803731   27131 certs.go:256] generating profile certs ...
	I1105 18:03:46.803788   27131 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:03:46.803806   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt with IP's: []
	I1105 18:03:46.868048   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt ...
	I1105 18:03:46.868073   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt: {Name:mk1b1384fd11cca80823d77e811ce40ed13a39a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.868260   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key ...
	I1105 18:03:46.868273   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key: {Name:mk63b8cd2995063e8f249e25659d0d581c1c609d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.868372   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a
	I1105 18:03:46.868394   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.254]
	I1105 18:03:47.168393   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a ...
	I1105 18:03:47.168422   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a: {Name:mkfb181b3090bd8c3e2b4c01d3e8bebb9949241a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.168598   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a ...
	I1105 18:03:47.168612   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a: {Name:mk8ee51e070e9f8f3516c15edb86d588cc060b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.168716   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:03:47.168827   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:03:47.168910   27131 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:03:47.168929   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt with IP's: []
	I1105 18:03:47.272330   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt ...
	I1105 18:03:47.272363   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt: {Name:mkef37902a8eaa82f4513587418829011c41aa9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.272551   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key ...
	I1105 18:03:47.272567   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key: {Name:mka47632f74c8924a4575ad6d317d9db035f5aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.272701   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:03:47.272727   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:03:47.272746   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:03:47.272764   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:03:47.272788   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:03:47.272803   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:03:47.272820   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:03:47.272860   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:03:47.272935   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:03:47.272983   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:03:47.272995   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:03:47.273029   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:03:47.273061   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:03:47.273095   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:03:47.273147   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:03:47.273189   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.273209   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.273227   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.273815   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:03:47.298487   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:03:47.321311   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:03:47.343337   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:03:47.365041   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 18:03:47.387466   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:03:47.409231   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:03:47.430651   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:03:47.452212   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:03:47.474137   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:03:47.495806   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:03:47.517223   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:03:47.532167   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:03:47.537576   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:03:47.549952   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.556864   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.556922   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.564072   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:03:47.575807   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:03:47.588714   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.593382   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.593445   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.601274   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:03:47.613497   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:03:47.623268   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.627461   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.627512   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.632828   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
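(Editor's note: the link names above, b5213941.0, 51391683.0 and 3ec20f2e.0, are OpenSSL subject-hash links: "openssl x509 -hash" prints the hash that OpenSSL-based clients use to look a CA up in /etc/ssl/certs, and the certificate is symlinked under <hash>.0. The pattern, as a sketch using the minikubeCA cert from this run:)

    # Compute the subject hash and create the <hash>.0 symlink OpenSSL expects.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")     # prints b5213941 for this CA
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"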
	I1105 18:03:47.642821   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:03:47.646365   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:03:47.646411   27131 kubeadm.go:392] StartCluster: {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:03:47.646477   27131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:03:47.646544   27131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:03:47.682117   27131 cri.go:89] found id: ""
	I1105 18:03:47.682186   27131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:03:47.691260   27131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:03:47.700258   27131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:03:47.708885   27131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:03:47.708907   27131 kubeadm.go:157] found existing configuration files:
	
	I1105 18:03:47.708950   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:03:47.717439   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:03:47.717497   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:03:47.726246   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:03:47.734558   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:03:47.734611   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:03:47.743183   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:03:47.751387   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:03:47.751433   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:03:47.760203   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:03:47.768178   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:03:47.768234   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:03:47.776770   27131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:03:47.967353   27131 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 18:03:59.183523   27131 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 18:03:59.183604   27131 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:03:59.183699   27131 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:03:59.183848   27131 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:03:59.183952   27131 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 18:03:59.184008   27131 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:03:59.185602   27131 out.go:235]   - Generating certificates and keys ...
	I1105 18:03:59.185696   27131 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:03:59.185773   27131 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:03:59.185856   27131 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 18:03:59.185912   27131 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 18:03:59.185997   27131 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 18:03:59.186086   27131 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 18:03:59.186173   27131 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 18:03:59.186341   27131 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-844661 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1105 18:03:59.186418   27131 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 18:03:59.186574   27131 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-844661 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1105 18:03:59.186680   27131 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 18:03:59.186753   27131 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 18:03:59.186826   27131 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 18:03:59.186915   27131 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:03:59.187003   27131 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:03:59.187068   27131 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 18:03:59.187122   27131 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:03:59.187247   27131 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:03:59.187350   27131 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:03:59.187464   27131 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:03:59.187595   27131 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:03:59.189162   27131 out.go:235]   - Booting up control plane ...
	I1105 18:03:59.189263   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:03:59.189330   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:03:59.189411   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:03:59.189560   27131 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:03:59.189674   27131 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:03:59.189732   27131 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:03:59.189870   27131 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 18:03:59.190000   27131 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 18:03:59.190063   27131 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.0020676s
	I1105 18:03:59.190152   27131 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 18:03:59.190232   27131 kubeadm.go:310] [api-check] The API server is healthy after 5.797330373s
	I1105 18:03:59.190371   27131 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 18:03:59.190545   27131 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 18:03:59.190621   27131 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 18:03:59.190819   27131 kubeadm.go:310] [mark-control-plane] Marking the node ha-844661 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 18:03:59.190908   27131 kubeadm.go:310] [bootstrap-token] Using token: 87pfeh.t954ki35wy37ojkf
	I1105 18:03:59.192164   27131 out.go:235]   - Configuring RBAC rules ...
	I1105 18:03:59.192251   27131 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 18:03:59.192336   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 18:03:59.192519   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 18:03:59.192749   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 18:03:59.192914   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 18:03:59.193036   27131 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 18:03:59.193159   27131 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 18:03:59.193205   27131 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 18:03:59.193263   27131 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 18:03:59.193287   27131 kubeadm.go:310] 
	I1105 18:03:59.193351   27131 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 18:03:59.193361   27131 kubeadm.go:310] 
	I1105 18:03:59.193483   27131 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 18:03:59.193498   27131 kubeadm.go:310] 
	I1105 18:03:59.193525   27131 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 18:03:59.193576   27131 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 18:03:59.193636   27131 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 18:03:59.193642   27131 kubeadm.go:310] 
	I1105 18:03:59.193690   27131 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 18:03:59.193695   27131 kubeadm.go:310] 
	I1105 18:03:59.193734   27131 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 18:03:59.193739   27131 kubeadm.go:310] 
	I1105 18:03:59.193790   27131 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 18:03:59.193854   27131 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 18:03:59.193915   27131 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 18:03:59.193921   27131 kubeadm.go:310] 
	I1105 18:03:59.193994   27131 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 18:03:59.194085   27131 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 18:03:59.194112   27131 kubeadm.go:310] 
	I1105 18:03:59.194272   27131 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 87pfeh.t954ki35wy37ojkf \
	I1105 18:03:59.194366   27131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 18:03:59.194391   27131 kubeadm.go:310] 	--control-plane 
	I1105 18:03:59.194397   27131 kubeadm.go:310] 
	I1105 18:03:59.194470   27131 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 18:03:59.194483   27131 kubeadm.go:310] 
	I1105 18:03:59.194599   27131 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 87pfeh.t954ki35wy37ojkf \
	I1105 18:03:59.194713   27131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
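(Editor's note: both join commands above embed the bootstrap token 87pfeh.t954ki35wy37ojkf, which the generated config gives a 24h ttl. If it expired before another node joined, a fresh join command could be printed on this control plane; a sketch using the same binary path as the rest of this run:)

    # Create a new bootstrap token and print the matching worker join command.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
      kubeadm token create --print-join-command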
	I1105 18:03:59.194723   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:59.194729   27131 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 18:03:59.196416   27131 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 18:03:59.198072   27131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 18:03:59.203679   27131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 18:03:59.203699   27131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 18:03:59.220864   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1105 18:03:59.577751   27131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 18:03:59.577851   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:03:59.577925   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661 minikube.k8s.io/updated_at=2024_11_05T18_03_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=true
	I1105 18:03:59.773949   27131 ops.go:34] apiserver oom_adj: -16
	I1105 18:03:59.774061   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:00.274452   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:00.774925   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:01.274873   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:01.774746   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:02.274653   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:02.410257   27131 kubeadm.go:1113] duration metric: took 2.832479659s to wait for elevateKubeSystemPrivileges
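(Editor's note: the repeated "kubectl get sa default" calls above are a readiness poll; the default ServiceAccount only appears once the controller-manager's service-account controller has run, so waiting for it bounds how early the minikube-rbac ClusterRoleBinding and later steps can rely on the control plane. A hand-rolled equivalent of that wait, illustrative only:)

    # Poll until the default ServiceAccount exists in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 1
    done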
	I1105 18:04:02.410297   27131 kubeadm.go:394] duration metric: took 14.763886485s to StartCluster
	I1105 18:04:02.410318   27131 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:02.410399   27131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:02.411281   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:02.411532   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 18:04:02.411550   27131 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:02.411572   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:04:02.411587   27131 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 18:04:02.411670   27131 addons.go:69] Setting storage-provisioner=true in profile "ha-844661"
	I1105 18:04:02.411690   27131 addons.go:234] Setting addon storage-provisioner=true in "ha-844661"
	I1105 18:04:02.411709   27131 addons.go:69] Setting default-storageclass=true in profile "ha-844661"
	I1105 18:04:02.411717   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:02.411726   27131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-844661"
	I1105 18:04:02.411747   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:02.412164   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.412164   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.412207   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.412212   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.427238   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I1105 18:04:02.427311   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I1105 18:04:02.427732   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.427772   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.428176   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.428198   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.428276   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.428292   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.428565   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.428588   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.428730   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.429124   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.429169   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.430653   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:02.430886   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 18:04:02.431352   27131 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 18:04:02.431554   27131 addons.go:234] Setting addon default-storageclass=true in "ha-844661"
	I1105 18:04:02.431592   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:02.431879   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.431911   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.444788   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1105 18:04:02.445225   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.445776   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.445800   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.446109   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.446308   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.446715   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1105 18:04:02.447172   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.447626   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.447652   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.447978   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.447989   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:02.448526   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.448566   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.450053   27131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:04:02.451430   27131 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:04:02.451447   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 18:04:02.451465   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:02.453936   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.454325   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:02.454352   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.454596   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:02.454747   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:02.454895   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:02.455039   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:02.463344   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1105 18:04:02.463824   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.464272   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.464295   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.464580   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.464736   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.466150   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:02.466325   27131 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 18:04:02.466346   27131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 18:04:02.466366   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:02.468861   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.469292   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:02.469320   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.469478   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:02.469641   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:02.469795   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:02.469919   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:02.559386   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 18:04:02.582601   27131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:04:02.634107   27131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 18:04:03.029603   27131 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
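(Editor's note: the long sed pipeline at 18:04:02.559 rewrites the coredns ConfigMap in place so that CoreDNS itself answers for host.minikube.internal. To inspect the patched Corefile, illustrative only, using the same kubectl invocation style as the rest of this log:)

    # The ConfigMap should now contain a hosts block like:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml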
	I1105 18:04:03.212900   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.212938   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.212957   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213012   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213238   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213254   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213263   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.213301   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213309   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213317   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213327   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.213335   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213567   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.213576   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.213601   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213608   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213606   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213626   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213684   27131 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 18:04:03.213697   27131 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 18:04:03.213833   27131 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1105 18:04:03.213847   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:03.213858   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:03.213863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:03.230734   27131 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1105 18:04:03.231584   27131 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1105 18:04:03.231606   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:03.231617   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:03.231624   27131 round_trippers.go:473]     Content-Type: application/json
	I1105 18:04:03.231628   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:03.238223   27131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:04:03.238372   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.238386   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.238717   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.238773   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.238806   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.241254   27131 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1105 18:04:03.242442   27131 addons.go:510] duration metric: took 830.859112ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1105 18:04:03.242476   27131 start.go:246] waiting for cluster config update ...
	I1105 18:04:03.242491   27131 start.go:255] writing updated cluster config ...
	I1105 18:04:03.244187   27131 out.go:201] 
	I1105 18:04:03.246027   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:03.246146   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:03.247790   27131 out.go:177] * Starting "ha-844661-m02" control-plane node in "ha-844661" cluster
	I1105 18:04:03.248926   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:04:03.248959   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:04:03.249079   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:04:03.249097   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:04:03.249198   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:03.249437   27131 start.go:360] acquireMachinesLock for ha-844661-m02: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:04:03.249497   27131 start.go:364] duration metric: took 35.772µs to acquireMachinesLock for "ha-844661-m02"
	I1105 18:04:03.249518   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:03.249605   27131 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1105 18:04:03.251175   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:04:03.251287   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:03.251335   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:03.267010   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I1105 18:04:03.267624   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:03.268242   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:03.268268   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:03.268591   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:03.268765   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:03.268983   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:03.269146   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:04:03.269172   27131 client.go:168] LocalClient.Create starting
	I1105 18:04:03.269203   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:04:03.269237   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:04:03.269249   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:04:03.269297   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:04:03.269315   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:04:03.269325   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:04:03.269338   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:04:03.269353   27131 main.go:141] libmachine: (ha-844661-m02) Calling .PreCreateCheck
	I1105 18:04:03.269514   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:03.269893   27131 main.go:141] libmachine: Creating machine...
	I1105 18:04:03.269906   27131 main.go:141] libmachine: (ha-844661-m02) Calling .Create
	I1105 18:04:03.270065   27131 main.go:141] libmachine: (ha-844661-m02) Creating KVM machine...
	I1105 18:04:03.271308   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found existing default KVM network
	I1105 18:04:03.271402   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found existing private KVM network mk-ha-844661
	I1105 18:04:03.271535   27131 main.go:141] libmachine: (ha-844661-m02) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 ...
	I1105 18:04:03.271561   27131 main.go:141] libmachine: (ha-844661-m02) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:04:03.271623   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.271523   27490 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:04:03.271709   27131 main.go:141] libmachine: (ha-844661-m02) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:04:03.505902   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.505765   27490 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa...
	I1105 18:04:03.597676   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.597557   27490 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/ha-844661-m02.rawdisk...
	I1105 18:04:03.597706   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Writing magic tar header
	I1105 18:04:03.597716   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Writing SSH key tar header
	I1105 18:04:03.597724   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.597692   27490 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 ...
	I1105 18:04:03.597812   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02
	I1105 18:04:03.597845   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:04:03.597903   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:04:03.597916   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 (perms=drwx------)
	I1105 18:04:03.597939   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:04:03.597948   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:04:03.597957   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:04:03.597965   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:04:03.597973   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:04:03.597977   27131 main.go:141] libmachine: (ha-844661-m02) Creating domain...
	I1105 18:04:03.598013   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:04:03.598038   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:04:03.598049   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:04:03.598061   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home
	I1105 18:04:03.598072   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Skipping /home - not owner
	I1105 18:04:03.598898   27131 main.go:141] libmachine: (ha-844661-m02) define libvirt domain using xml: 
	I1105 18:04:03.598916   27131 main.go:141] libmachine: (ha-844661-m02) <domain type='kvm'>
	I1105 18:04:03.598925   27131 main.go:141] libmachine: (ha-844661-m02)   <name>ha-844661-m02</name>
	I1105 18:04:03.598932   27131 main.go:141] libmachine: (ha-844661-m02)   <memory unit='MiB'>2200</memory>
	I1105 18:04:03.598941   27131 main.go:141] libmachine: (ha-844661-m02)   <vcpu>2</vcpu>
	I1105 18:04:03.598947   27131 main.go:141] libmachine: (ha-844661-m02)   <features>
	I1105 18:04:03.598959   27131 main.go:141] libmachine: (ha-844661-m02)     <acpi/>
	I1105 18:04:03.598965   27131 main.go:141] libmachine: (ha-844661-m02)     <apic/>
	I1105 18:04:03.598984   27131 main.go:141] libmachine: (ha-844661-m02)     <pae/>
	I1105 18:04:03.598993   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599024   27131 main.go:141] libmachine: (ha-844661-m02)   </features>
	I1105 18:04:03.599044   27131 main.go:141] libmachine: (ha-844661-m02)   <cpu mode='host-passthrough'>
	I1105 18:04:03.599055   27131 main.go:141] libmachine: (ha-844661-m02)   
	I1105 18:04:03.599061   27131 main.go:141] libmachine: (ha-844661-m02)   </cpu>
	I1105 18:04:03.599069   27131 main.go:141] libmachine: (ha-844661-m02)   <os>
	I1105 18:04:03.599077   27131 main.go:141] libmachine: (ha-844661-m02)     <type>hvm</type>
	I1105 18:04:03.599086   27131 main.go:141] libmachine: (ha-844661-m02)     <boot dev='cdrom'/>
	I1105 18:04:03.599093   27131 main.go:141] libmachine: (ha-844661-m02)     <boot dev='hd'/>
	I1105 18:04:03.599109   27131 main.go:141] libmachine: (ha-844661-m02)     <bootmenu enable='no'/>
	I1105 18:04:03.599120   27131 main.go:141] libmachine: (ha-844661-m02)   </os>
	I1105 18:04:03.599128   27131 main.go:141] libmachine: (ha-844661-m02)   <devices>
	I1105 18:04:03.599142   27131 main.go:141] libmachine: (ha-844661-m02)     <disk type='file' device='cdrom'>
	I1105 18:04:03.599158   27131 main.go:141] libmachine: (ha-844661-m02)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/boot2docker.iso'/>
	I1105 18:04:03.599168   27131 main.go:141] libmachine: (ha-844661-m02)       <target dev='hdc' bus='scsi'/>
	I1105 18:04:03.599177   27131 main.go:141] libmachine: (ha-844661-m02)       <readonly/>
	I1105 18:04:03.599191   27131 main.go:141] libmachine: (ha-844661-m02)     </disk>
	I1105 18:04:03.599203   27131 main.go:141] libmachine: (ha-844661-m02)     <disk type='file' device='disk'>
	I1105 18:04:03.599219   27131 main.go:141] libmachine: (ha-844661-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:04:03.599234   27131 main.go:141] libmachine: (ha-844661-m02)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/ha-844661-m02.rawdisk'/>
	I1105 18:04:03.599245   27131 main.go:141] libmachine: (ha-844661-m02)       <target dev='hda' bus='virtio'/>
	I1105 18:04:03.599254   27131 main.go:141] libmachine: (ha-844661-m02)     </disk>
	I1105 18:04:03.599264   27131 main.go:141] libmachine: (ha-844661-m02)     <interface type='network'>
	I1105 18:04:03.599277   27131 main.go:141] libmachine: (ha-844661-m02)       <source network='mk-ha-844661'/>
	I1105 18:04:03.599295   27131 main.go:141] libmachine: (ha-844661-m02)       <model type='virtio'/>
	I1105 18:04:03.599306   27131 main.go:141] libmachine: (ha-844661-m02)     </interface>
	I1105 18:04:03.599316   27131 main.go:141] libmachine: (ha-844661-m02)     <interface type='network'>
	I1105 18:04:03.599328   27131 main.go:141] libmachine: (ha-844661-m02)       <source network='default'/>
	I1105 18:04:03.599336   27131 main.go:141] libmachine: (ha-844661-m02)       <model type='virtio'/>
	I1105 18:04:03.599346   27131 main.go:141] libmachine: (ha-844661-m02)     </interface>
	I1105 18:04:03.599360   27131 main.go:141] libmachine: (ha-844661-m02)     <serial type='pty'>
	I1105 18:04:03.599371   27131 main.go:141] libmachine: (ha-844661-m02)       <target port='0'/>
	I1105 18:04:03.599379   27131 main.go:141] libmachine: (ha-844661-m02)     </serial>
	I1105 18:04:03.599388   27131 main.go:141] libmachine: (ha-844661-m02)     <console type='pty'>
	I1105 18:04:03.599395   27131 main.go:141] libmachine: (ha-844661-m02)       <target type='serial' port='0'/>
	I1105 18:04:03.599405   27131 main.go:141] libmachine: (ha-844661-m02)     </console>
	I1105 18:04:03.599414   27131 main.go:141] libmachine: (ha-844661-m02)     <rng model='virtio'>
	I1105 18:04:03.599426   27131 main.go:141] libmachine: (ha-844661-m02)       <backend model='random'>/dev/random</backend>
	I1105 18:04:03.599433   27131 main.go:141] libmachine: (ha-844661-m02)     </rng>
	I1105 18:04:03.599441   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599450   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599458   27131 main.go:141] libmachine: (ha-844661-m02)   </devices>
	I1105 18:04:03.599468   27131 main.go:141] libmachine: (ha-844661-m02) </domain>
	I1105 18:04:03.599478   27131 main.go:141] libmachine: (ha-844661-m02) 
	I1105 18:04:03.606202   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:bc:44:b3 in network default
	I1105 18:04:03.606844   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring networks are active...
	I1105 18:04:03.606873   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:03.607579   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring network default is active
	I1105 18:04:03.607877   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring network mk-ha-844661 is active
	I1105 18:04:03.608339   27131 main.go:141] libmachine: (ha-844661-m02) Getting domain xml...
	I1105 18:04:03.609124   27131 main.go:141] libmachine: (ha-844661-m02) Creating domain...
	I1105 18:04:04.804854   27131 main.go:141] libmachine: (ha-844661-m02) Waiting to get IP...
	I1105 18:04:04.805676   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:04.806067   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:04.806128   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:04.806059   27490 retry.go:31] will retry after 221.645511ms: waiting for machine to come up
	I1105 18:04:05.029505   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.029976   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.030010   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.029926   27490 retry.go:31] will retry after 382.599739ms: waiting for machine to come up
	I1105 18:04:05.414471   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.414907   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.414933   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.414864   27490 retry.go:31] will retry after 327.048237ms: waiting for machine to come up
	I1105 18:04:05.743302   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.743771   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.743804   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.743710   27490 retry.go:31] will retry after 518.430277ms: waiting for machine to come up
	I1105 18:04:06.263310   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:06.263829   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:06.263853   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:06.263789   27490 retry.go:31] will retry after 629.481848ms: waiting for machine to come up
	I1105 18:04:06.894494   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:06.895089   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:06.895118   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:06.895038   27490 retry.go:31] will retry after 880.755684ms: waiting for machine to come up
	I1105 18:04:07.777105   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:07.777585   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:07.777629   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:07.777517   27490 retry.go:31] will retry after 728.781586ms: waiting for machine to come up
	I1105 18:04:08.507833   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:08.508322   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:08.508350   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:08.508268   27490 retry.go:31] will retry after 1.405343367s: waiting for machine to come up
	I1105 18:04:09.915737   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:09.916175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:09.916206   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:09.916130   27490 retry.go:31] will retry after 1.614277424s: waiting for machine to come up
	I1105 18:04:11.532132   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:11.532606   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:11.532651   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:11.532528   27490 retry.go:31] will retry after 2.182290087s: waiting for machine to come up
	I1105 18:04:13.716671   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:13.717064   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:13.717090   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:13.717036   27490 retry.go:31] will retry after 2.181711488s: waiting for machine to come up
	I1105 18:04:15.901246   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:15.901742   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:15.901769   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:15.901678   27490 retry.go:31] will retry after 3.553887492s: waiting for machine to come up
	I1105 18:04:19.457631   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:19.458252   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:19.458280   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:19.458200   27490 retry.go:31] will retry after 2.842714356s: waiting for machine to come up
	I1105 18:04:22.304175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:22.304555   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:22.304577   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:22.304516   27490 retry.go:31] will retry after 4.429177675s: waiting for machine to come up
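The repeated "will retry after ..." lines above are the kvm2 driver polling libvirt for a DHCP lease, sleeping a growing, jittered delay between attempts. A minimal Go sketch of that retry pattern follows; the names, base delay, and timeout are hypothetical and chosen only to mirror the intervals logged above, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn until it succeeds or the deadline passes, sleeping a
// randomized, growing delay between attempts.
func waitFor(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// Grow the delay each attempt and add jitter, similar to the
		// 221ms, 382ms, ... intervals in the log above.
		delay := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	// Stand-in for "does the domain have a DHCP lease yet?"; succeeds after ~3s.
	err := waitFor(func() error {
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("no DHCP lease yet")
	}, 30*time.Second)
	fmt.Println("done:", err)
}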
	I1105 18:04:26.738445   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.738953   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has current primary IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.739021   27131 main.go:141] libmachine: (ha-844661-m02) Found IP for machine: 192.168.39.38
	I1105 18:04:26.739034   27131 main.go:141] libmachine: (ha-844661-m02) Reserving static IP address...
	I1105 18:04:26.739350   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find host DHCP lease matching {name: "ha-844661-m02", mac: "52:54:00:46:71:ad", ip: "192.168.39.38"} in network mk-ha-844661
	I1105 18:04:26.812299   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Getting to WaitForSSH function...
	I1105 18:04:26.812324   27131 main.go:141] libmachine: (ha-844661-m02) Reserved static IP address: 192.168.39.38
	I1105 18:04:26.812336   27131 main.go:141] libmachine: (ha-844661-m02) Waiting for SSH to be available...
	I1105 18:04:26.815175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.815513   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661
	I1105 18:04:26.815540   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find defined IP address of network mk-ha-844661 interface with MAC address 52:54:00:46:71:ad
	I1105 18:04:26.815668   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH client type: external
	I1105 18:04:26.815699   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa (-rw-------)
	I1105 18:04:26.815752   27131 main.go:141] libmachine: (ha-844661-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:04:26.815781   27131 main.go:141] libmachine: (ha-844661-m02) DBG | About to run SSH command:
	I1105 18:04:26.815798   27131 main.go:141] libmachine: (ha-844661-m02) DBG | exit 0
	I1105 18:04:26.819693   27131 main.go:141] libmachine: (ha-844661-m02) DBG | SSH cmd err, output: exit status 255: 
	I1105 18:04:26.819710   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1105 18:04:26.819733   27131 main.go:141] libmachine: (ha-844661-m02) DBG | command : exit 0
	I1105 18:04:26.819747   27131 main.go:141] libmachine: (ha-844661-m02) DBG | err     : exit status 255
	I1105 18:04:26.819758   27131 main.go:141] libmachine: (ha-844661-m02) DBG | output  : 
	I1105 18:04:29.821203   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Getting to WaitForSSH function...
	I1105 18:04:29.823337   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.823729   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:29.823762   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.823872   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH client type: external
	I1105 18:04:29.823894   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa (-rw-------)
	I1105 18:04:29.823922   27131 main.go:141] libmachine: (ha-844661-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:04:29.823940   27131 main.go:141] libmachine: (ha-844661-m02) DBG | About to run SSH command:
	I1105 18:04:29.823952   27131 main.go:141] libmachine: (ha-844661-m02) DBG | exit 0
	I1105 18:04:29.951085   27131 main.go:141] libmachine: (ha-844661-m02) DBG | SSH cmd err, output: <nil>: 
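WaitForSSH above shells out to the system ssh binary with the options shown in the log and retries `exit 0` until it returns success (the first attempt fails with exit status 255 because the lease is not yet visible). A small sketch of that probe, assuming the same key path and guest IP that appear in the log; the 3-second retry interval is inferred from the timestamps and is not a documented constant.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... exit 0` with a subset of the options from the log
// and reports whether it exited successfully.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath, "-p", "22",
		"docker@" + ip, "exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	ip := "192.168.39.38" // IP from the DHCP lease found above
	key := "/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa"
	for !sshReady(ip, key) {
		fmt.Println("SSH not ready, retrying in 3s...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}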
	I1105 18:04:29.951342   27131 main.go:141] libmachine: (ha-844661-m02) KVM machine creation complete!
	I1105 18:04:29.951700   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:29.952363   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:29.952587   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:29.952760   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:04:29.952794   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetState
	I1105 18:04:29.954134   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:04:29.954148   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:04:29.954153   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:04:29.954158   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:29.956382   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.956701   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:29.956727   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.956885   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:29.957041   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:29.957158   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:29.957245   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:29.957384   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:29.957587   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:29.957598   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:04:30.062109   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:04:30.062134   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:04:30.062144   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.064857   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.065391   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.065423   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.065611   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.065805   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.065970   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.066128   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.066292   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.066496   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.066512   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:04:30.175484   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:04:30.175559   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:04:30.175573   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:04:30.175583   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.175860   27131 buildroot.go:166] provisioning hostname "ha-844661-m02"
	I1105 18:04:30.175892   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.176101   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.178534   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.178884   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.178952   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.179036   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.179212   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.179364   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.179519   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.179693   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.179914   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.179935   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661-m02 && echo "ha-844661-m02" | sudo tee /etc/hostname
	I1105 18:04:30.302286   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661-m02
	
	I1105 18:04:30.302313   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.305041   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.305376   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.305397   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.305565   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.305735   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.305864   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.306027   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.306153   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.306345   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.306368   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:04:30.418880   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:04:30.418913   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:04:30.418933   27131 buildroot.go:174] setting up certificates
	I1105 18:04:30.418944   27131 provision.go:84] configureAuth start
	I1105 18:04:30.418958   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.419230   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:30.421818   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.422198   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.422218   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.422357   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.424553   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.424893   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.424934   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.425058   27131 provision.go:143] copyHostCerts
	I1105 18:04:30.425085   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:04:30.425123   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:04:30.425135   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:04:30.425209   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:04:30.425294   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:04:30.425312   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:04:30.425316   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:04:30.425339   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:04:30.425392   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:04:30.425411   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:04:30.425417   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:04:30.425437   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:04:30.425500   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661-m02 san=[127.0.0.1 192.168.39.38 ha-844661-m02 localhost minikube]
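The provision.go line above generates the machine's server certificate from the local CA with SANs 127.0.0.1, 192.168.39.38, ha-844661-m02, localhost and minikube, using the profile's 26280h CertExpiration. The following is a minimal crypto/x509 sketch of issuing such a certificate; it is illustrative only and not minikube's actual certificate code (the CA is generated in place here, whereas minikube loads it from certs/ca.pem and certs/ca-key.pem).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate with the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-844661-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-844661-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.38")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}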
	I1105 18:04:30.669687   27131 provision.go:177] copyRemoteCerts
	I1105 18:04:30.669745   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:04:30.669767   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.672398   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.672764   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.672792   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.672964   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.673166   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.673319   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.673440   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:30.757634   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:04:30.757707   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:04:30.779929   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:04:30.779991   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:04:30.802282   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:04:30.802340   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:04:30.824080   27131 provision.go:87] duration metric: took 405.122043ms to configureAuth
	I1105 18:04:30.824105   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:04:30.824267   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:30.824337   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.826767   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.827187   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.827210   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.827374   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.827574   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.827761   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.827911   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.828074   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.828241   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.828257   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:04:31.054134   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:04:31.054167   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:04:31.054177   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetURL
	I1105 18:04:31.055397   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using libvirt version 6000000
	I1105 18:04:31.057579   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.057909   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.057942   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.058035   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:04:31.058055   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:04:31.058063   27131 client.go:171] duration metric: took 27.788882282s to LocalClient.Create
	I1105 18:04:31.058089   27131 start.go:167] duration metric: took 27.788944247s to libmachine.API.Create "ha-844661"
	I1105 18:04:31.058102   27131 start.go:293] postStartSetup for "ha-844661-m02" (driver="kvm2")
	I1105 18:04:31.058116   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:04:31.058140   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.058392   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:04:31.058416   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.060812   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.061181   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.061207   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.061372   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.061520   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.061638   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.061750   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.141343   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:04:31.145282   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:04:31.145305   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:04:31.145386   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:04:31.145475   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:04:31.145487   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:04:31.145583   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:04:31.154867   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:04:31.177214   27131 start.go:296] duration metric: took 119.098287ms for postStartSetup
	I1105 18:04:31.177266   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:31.177795   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:31.180218   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.180581   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.180609   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.180893   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:31.181127   27131 start.go:128] duration metric: took 27.931509235s to createHost
	I1105 18:04:31.181151   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.183589   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.183931   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.183977   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.184093   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.184255   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.184473   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.184627   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.184776   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:31.184927   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:31.184936   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:04:31.291832   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829871.274251077
	
	I1105 18:04:31.291862   27131 fix.go:216] guest clock: 1730829871.274251077
	I1105 18:04:31.291873   27131 fix.go:229] Guest: 2024-11-05 18:04:31.274251077 +0000 UTC Remote: 2024-11-05 18:04:31.181141215 +0000 UTC m=+70.565834196 (delta=93.109862ms)
	I1105 18:04:31.291893   27131 fix.go:200] guest clock delta is within tolerance: 93.109862ms
	I1105 18:04:31.291902   27131 start.go:83] releasing machines lock for "ha-844661-m02", held for 28.042391542s
	I1105 18:04:31.291933   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.292188   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:31.294847   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.295152   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.295182   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.297372   27131 out.go:177] * Found network options:
	I1105 18:04:31.298882   27131 out.go:177]   - NO_PROXY=192.168.39.48
	W1105 18:04:31.300182   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:04:31.300214   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.300744   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.300953   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.301049   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:04:31.301078   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	W1105 18:04:31.301139   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:04:31.301229   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:04:31.301249   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.303834   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304115   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304147   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.304164   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304340   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.304518   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.304656   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.304683   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304705   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.304817   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.304875   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.304966   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.305123   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.305293   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.537813   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:04:31.543318   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:04:31.543380   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:04:31.558192   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:04:31.558214   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:04:31.558265   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:04:31.574444   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:04:31.588020   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:04:31.588073   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:04:31.601225   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:04:31.614872   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:04:31.742673   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:04:31.906474   27131 docker.go:233] disabling docker service ...
	I1105 18:04:31.906547   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:04:31.920407   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:04:31.932829   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:04:32.065646   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:04:32.198693   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:04:32.211636   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:04:32.228537   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:04:32.228604   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.238359   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:04:32.238426   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.248245   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.258019   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.267772   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:04:32.277903   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.287745   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.304428   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.315166   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:04:32.324687   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:04:32.324739   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:04:32.338701   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:04:32.349299   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:32.473469   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
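The sed edits above rewrite the CRI-O drop-in in place before the daemon restart; a minimal spot-check of the result, assuming the same file layout shown in the log (/etc/crio/crio.conf.d/02-crio.conf and /etc/crictl.yaml), could look like:

    # Verify the pause image, cgroup driver, conmon cgroup and sysctl edits landed
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # Confirm crictl is pointed at the CRI-O socket configured earlier
    cat /etc/crictl.yaml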
	I1105 18:04:32.562263   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:04:32.562341   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:04:32.567966   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:04:32.568012   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:04:32.571415   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:04:32.608501   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:04:32.608591   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:04:32.636314   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:04:32.664649   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:04:32.666073   27131 out.go:177]   - env NO_PROXY=192.168.39.48
	I1105 18:04:32.667578   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:32.670054   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:32.670404   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:32.670434   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:32.670640   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:04:32.675107   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:04:32.687100   27131 mustload.go:65] Loading cluster: ha-844661
	I1105 18:04:32.687313   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:32.687563   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:32.687614   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:32.702173   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I1105 18:04:32.702544   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:32.703040   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:32.703059   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:32.703356   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:32.703527   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:32.705121   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:32.705395   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:32.705427   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:32.719590   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I1105 18:04:32.719963   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:32.720450   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:32.720471   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:32.720753   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:32.720928   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:32.721076   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.38
	I1105 18:04:32.721087   27131 certs.go:194] generating shared ca certs ...
	I1105 18:04:32.721099   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.721216   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:04:32.721253   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:04:32.721262   27131 certs.go:256] generating profile certs ...
	I1105 18:04:32.721325   27131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:04:32.721348   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8
	I1105 18:04:32.721359   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.254]
	I1105 18:04:32.817294   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 ...
	I1105 18:04:32.817319   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8: {Name:mk45feacdbeaf35fb15921aeeafdbedf19f7f2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.817474   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8 ...
	I1105 18:04:32.817487   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8: {Name:mkf0dcf762cb289770c94346689eba9d112e92a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.817551   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:04:32.817676   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:04:32.817799   27131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:04:32.817813   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:04:32.817827   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:04:32.817838   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:04:32.817853   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:04:32.817867   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:04:32.817879   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:04:32.817890   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:04:32.817899   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:04:32.817954   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:04:32.817983   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:04:32.817992   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:04:32.818014   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:04:32.818034   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:04:32.818055   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:04:32.818093   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:04:32.818118   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:04:32.818132   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:04:32.818145   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:32.818175   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:32.821627   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:32.822087   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:32.822115   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:32.822324   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:32.822514   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:32.822635   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:32.822754   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:32.895384   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:04:32.901151   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:04:32.911563   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:04:32.916135   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1105 18:04:32.926023   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:04:32.929795   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:04:32.939479   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:04:32.943460   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:04:32.953743   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:04:32.957464   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:04:32.967126   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:04:32.971370   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 18:04:32.981265   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:04:33.005948   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:04:33.028537   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:04:33.051691   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:04:33.077296   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 18:04:33.099924   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:04:33.122118   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:04:33.144496   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:04:33.167061   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:04:33.189719   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:04:33.212311   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:04:33.234431   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:04:33.249569   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1105 18:04:33.264947   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:04:33.280382   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:04:33.295047   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:04:33.310658   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 18:04:33.325227   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:04:33.340438   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:04:33.345637   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:04:33.355163   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.359277   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.359332   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.364640   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:04:33.374197   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:04:33.383883   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.388205   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.388269   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.393534   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:04:33.403611   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:04:33.413496   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.417522   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.417572   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.422911   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
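The symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) are named after each certificate's subject hash; a sketch of deriving one such name by hand, using the minikubeCA.pem path from the log:

    # openssl prints the subject hash that the OpenSSL cert lookup uses
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"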
	I1105 18:04:33.432783   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:04:33.436475   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:04:33.436531   27131 kubeadm.go:934] updating node {m02 192.168.39.38 8443 v1.31.2 crio true true} ...
	I1105 18:04:33.436634   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
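The kubelet drop-in above pins --hostname-override and --node-ip for the new control-plane node; a rough way to confirm the rendered flags on the node once kubelet is running (assuming the standard systemd layout used here):

    # Show the unit plus drop-ins that were just written
    systemctl cat kubelet | grep -- --node-ip
    # Confirm the running process carries the expected hostname override
    pgrep -af kubelet | grep -- --hostname-override=ha-844661-m02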
	I1105 18:04:33.436658   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:04:33.436695   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:04:33.453065   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:04:33.453148   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
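The manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod that advertises the HA VIP 192.168.39.254 on port 8443; a rough liveness check once a control plane holds the VIP (curl -k skips TLS verification, acceptable for a health probe):

    sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml
    curl -k https://192.168.39.254:8443/healthz   # expect "ok" when the VIP is active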
	I1105 18:04:33.453221   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:04:33.462691   27131 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 18:04:33.462762   27131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 18:04:33.472553   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 18:04:33.472563   27131 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1105 18:04:33.472583   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:04:33.472584   27131 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1105 18:04:33.472655   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:04:33.477105   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 18:04:33.477133   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 18:04:34.400283   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:04:34.400361   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:04:34.405010   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 18:04:34.405045   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 18:04:34.538786   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:04:34.578282   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:04:34.578382   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:04:34.588498   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 18:04:34.588540   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1105 18:04:34.951438   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:04:34.960448   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1105 18:04:34.976680   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:04:34.992424   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:04:35.007877   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:04:35.011593   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:04:35.023033   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:35.153794   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:04:35.171325   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:35.171790   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:35.171844   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:35.187008   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I1105 18:04:35.187511   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:35.188000   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:35.188021   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:35.188401   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:35.188593   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:35.188755   27131 start.go:317] joinCluster: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:04:35.188861   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 18:04:35.188876   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:35.192373   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:35.193007   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:35.193036   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:35.193153   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:35.193322   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:35.193493   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:35.193633   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:35.352325   27131 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:35.352369   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token io85g1.ce9beps1a5sdfopc --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m02 --control-plane --apiserver-advertise-address=192.168.39.38 --apiserver-bind-port=8443"
	I1105 18:04:56.900009   27131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token io85g1.ce9beps1a5sdfopc --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m02 --control-plane --apiserver-advertise-address=192.168.39.38 --apiserver-bind-port=8443": (21.547609543s)
	I1105 18:04:56.900049   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 18:04:57.434153   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661-m02 minikube.k8s.io/updated_at=2024_11_05T18_04_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=false
	I1105 18:04:57.562849   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844661-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 18:04:57.694503   27131 start.go:319] duration metric: took 22.505743601s to joinCluster
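The label and taint commands above mark m02 as a non-primary control plane and make it schedulable; assuming kubectl is pointed at the same cluster, the effect can be spot-checked with:

    kubectl get node ha-844661-m02 --show-labels | grep minikube.k8s.io/primary=false
    kubectl describe node ha-844661-m02 | grep -A1 Taints   # control-plane NoSchedule taint should be gone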
	I1105 18:04:57.694592   27131 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:57.694912   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:57.695940   27131 out.go:177] * Verifying Kubernetes components...
	I1105 18:04:57.697102   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:57.983429   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:04:58.029548   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:58.029888   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:04:58.029994   27131 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.48:8443
	I1105 18:04:58.030271   27131 node_ready.go:35] waiting up to 6m0s for node "ha-844661-m02" to be "Ready" ...
	I1105 18:04:58.030407   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:58.030418   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:58.030429   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:58.030436   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:58.043836   27131 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 18:04:58.531097   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:58.531124   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:58.531135   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:58.531142   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:58.543712   27131 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1105 18:04:59.030878   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:59.030899   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:59.030908   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:59.030912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:59.035656   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:04:59.530596   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:59.530621   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:59.530633   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:59.530639   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:59.534120   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:00.030984   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:00.031006   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:00.031014   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:00.031017   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:00.034282   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:00.035034   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
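The loop above simply re-issues GET /api/v1/nodes/ha-844661-m02 roughly every 500ms until the node reports a Ready condition of True, within the 6m budget logged earlier; roughly the same check from the CLI would be:

    kubectl wait --for=condition=Ready node/ha-844661-m02 --timeout=6m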
	I1105 18:05:00.530821   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:00.530846   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:00.530858   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:00.530864   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:00.536618   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:05:01.031310   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:01.031331   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:01.031340   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:01.031345   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:01.034641   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:01.530557   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:01.530578   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:01.530588   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:01.530595   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:01.539049   27131 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1105 18:05:02.031172   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:02.031197   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:02.031206   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:02.031210   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:02.034664   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:02.035295   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:02.531134   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:02.531158   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:02.531168   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:02.531173   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:02.534691   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:03.030649   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:03.030676   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:03.030684   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:03.030689   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:03.034294   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:03.531341   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:03.531362   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:03.531370   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:03.531374   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:03.534345   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:04.031389   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:04.031412   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:04.031420   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:04.031425   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:04.034432   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:04.531089   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:04.531121   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:04.531130   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:04.531134   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:04.534592   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:04.535270   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:05.030583   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:05.030606   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:05.030614   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:05.030618   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:05.034321   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:05.530714   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:05.530735   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:05.530744   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:05.530748   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:05.534305   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:06.031071   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:06.031093   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:06.031101   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:06.031105   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:06.034416   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:06.531473   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:06.531497   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:06.531506   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:06.531513   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:06.534473   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:07.030494   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:07.030518   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:07.030526   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:07.030530   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:07.033934   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:07.034429   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:07.530834   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:07.530861   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:07.530871   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:07.530876   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:07.534136   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:08.031065   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:08.031086   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:08.031094   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:08.031097   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:08.034490   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:08.530752   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:08.530774   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:08.530782   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:08.530787   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:08.534189   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:09.030956   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:09.030998   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:09.031007   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:09.031013   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:09.034514   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:09.035140   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:09.531531   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:09.531558   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:09.531569   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:09.531577   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:09.534682   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:10.030566   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:10.030603   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:10.030611   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:10.030615   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:10.034288   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:10.530760   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:10.530786   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:10.530797   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:10.530803   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:10.535094   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:11.031135   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:11.031156   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:11.031164   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:11.031167   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:11.034996   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:11.035590   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:11.530958   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:11.531025   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:11.531033   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:11.531036   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:11.534280   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:12.031192   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:12.031217   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:12.031226   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:12.031229   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:12.034799   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:12.530835   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:12.530859   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:12.530866   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:12.530871   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:12.535212   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:13.031138   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:13.031161   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:13.031168   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:13.031174   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:13.035138   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:13.035640   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:13.531336   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:13.531361   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:13.531372   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:13.531377   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:13.534343   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:14.031248   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:14.031269   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:14.031277   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:14.031280   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:14.034318   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:14.531121   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:14.531144   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:14.531152   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:14.531156   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:14.534522   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.031444   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:15.031471   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:15.031481   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:15.031485   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:15.035107   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.531231   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:15.531259   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:15.531295   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:15.531301   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:15.534694   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.535240   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:16.031143   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:16.031166   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:16.031174   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:16.031178   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:16.034542   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:16.530558   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:16.530585   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:16.530592   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:16.530596   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:16.534438   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.031334   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.031354   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.031363   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.031377   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.034859   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.530585   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.530609   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.530617   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.530621   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.534242   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.534822   27131 node_ready.go:49] node "ha-844661-m02" has status "Ready":"True"
	I1105 18:05:17.534842   27131 node_ready.go:38] duration metric: took 19.504524126s for node "ha-844661-m02" to be "Ready" ...
	I1105 18:05:17.534853   27131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:05:17.534933   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:17.534945   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.534955   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.534962   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.539957   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:17.545365   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.545456   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4bdfz
	I1105 18:05:17.545468   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.545479   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.545485   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.548667   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.549324   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.549340   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.549350   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.549355   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.552460   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.553059   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.553079   27131 pod_ready.go:82] duration metric: took 7.687809ms for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.553089   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.553143   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5g97
	I1105 18:05:17.553151   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.553157   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.553161   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.556133   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.556688   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.556701   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.556708   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.556711   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.559655   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.560102   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.560125   27131 pod_ready.go:82] duration metric: took 7.028626ms for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.560138   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.560192   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661
	I1105 18:05:17.560200   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.560207   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.560211   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.563041   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.563593   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.563605   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.563612   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.563617   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.566382   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.566799   27131 pod_ready.go:93] pod "etcd-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.566816   27131 pod_ready.go:82] duration metric: took 6.672004ms for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.566824   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.566881   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m02
	I1105 18:05:17.566890   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.566897   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.566901   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.570076   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.570614   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.570630   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.570639   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.570644   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.574134   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.574566   27131 pod_ready.go:93] pod "etcd-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.574584   27131 pod_ready.go:82] duration metric: took 7.753168ms for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.574604   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.730613   27131 request.go:632] Waited for 155.951288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:05:17.730716   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:05:17.730738   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.730750   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.730756   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.734460   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.931599   27131 request.go:632] Waited for 196.455308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.931691   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.931703   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.931714   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.931720   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.935472   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.936248   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.936270   27131 pod_ready.go:82] duration metric: took 361.658171ms for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.936283   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.131401   27131 request.go:632] Waited for 195.044956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:05:18.131499   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:05:18.131506   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.131514   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.131520   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.135482   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.331525   27131 request.go:632] Waited for 195.194468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:18.331593   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:18.331598   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.331605   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.331610   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.334692   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.335419   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:18.335438   27131 pod_ready.go:82] duration metric: took 399.143957ms for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.335449   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.530629   27131 request.go:632] Waited for 195.065538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:05:18.530715   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:05:18.530724   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.530734   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.530747   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.534793   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:18.731049   27131 request.go:632] Waited for 195.44458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:18.731128   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:18.731134   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.731143   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.731148   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.734646   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.735269   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:18.735297   27131 pod_ready.go:82] duration metric: took 399.840715ms for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.735311   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.931233   27131 request.go:632] Waited for 195.850053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:05:18.931303   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:05:18.931310   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.931320   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.931326   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.935301   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.131408   27131 request.go:632] Waited for 195.30965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.131471   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.131476   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.131483   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.131487   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.134983   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.135599   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.135639   27131 pod_ready.go:82] duration metric: took 400.298272ms for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.135650   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.330670   27131 request.go:632] Waited for 194.9293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:05:19.330729   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:05:19.330734   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.330741   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.330745   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.334278   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.531215   27131 request.go:632] Waited for 196.368669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:19.531275   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:19.531280   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.531287   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.531290   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.535032   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.535778   27131 pod_ready.go:93] pod "kube-proxy-pjpkh" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.535799   27131 pod_ready.go:82] duration metric: took 400.142488ms for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.535811   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.730859   27131 request.go:632] Waited for 194.981031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:05:19.730957   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:05:19.730981   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.730993   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.731003   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.734505   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.931630   27131 request.go:632] Waited for 196.356772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.931695   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.931703   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.931713   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.931721   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.934664   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:19.935138   27131 pod_ready.go:93] pod "kube-proxy-zsbfs" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.935158   27131 pod_ready.go:82] duration metric: took 399.338721ms for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.935171   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.131253   27131 request.go:632] Waited for 196.012842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:05:20.131339   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:05:20.131346   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.131354   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.131365   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.135136   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.331213   27131 request.go:632] Waited for 195.465792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:20.331270   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:20.331276   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.331283   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.331287   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.334310   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.334872   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:20.334894   27131 pod_ready.go:82] duration metric: took 399.711008ms for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.334908   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.531014   27131 request.go:632] Waited for 195.998146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:05:20.531072   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:05:20.531077   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.531084   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.531092   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.534503   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.731389   27131 request.go:632] Waited for 196.312857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:20.731476   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:20.731488   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.731496   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.731502   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.734866   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.735369   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:20.735387   27131 pod_ready.go:82] duration metric: took 400.467875ms for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.735398   27131 pod_ready.go:39] duration metric: took 3.200533347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:05:20.735415   27131 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:05:20.735464   27131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:05:20.751422   27131 api_server.go:72] duration metric: took 23.056783291s to wait for apiserver process to appear ...
	I1105 18:05:20.751455   27131 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:05:20.751507   27131 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1105 18:05:20.755872   27131 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1105 18:05:20.755957   27131 round_trippers.go:463] GET https://192.168.39.48:8443/version
	I1105 18:05:20.755969   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.755980   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.755990   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.756842   27131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 18:05:20.756943   27131 api_server.go:141] control plane version: v1.31.2
	I1105 18:05:20.756968   27131 api_server.go:131] duration metric: took 5.494459ms to wait for apiserver health ...
	I1105 18:05:20.756978   27131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:05:20.930580   27131 request.go:632] Waited for 173.520285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:20.930658   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:20.930664   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.930672   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.930676   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.935815   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:05:20.939904   27131 system_pods.go:59] 17 kube-system pods found
	I1105 18:05:20.939939   27131 system_pods.go:61] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:05:20.939945   27131 system_pods.go:61] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:05:20.939949   27131 system_pods.go:61] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:05:20.939952   27131 system_pods.go:61] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:05:20.939955   27131 system_pods.go:61] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:05:20.939959   27131 system_pods.go:61] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:05:20.939962   27131 system_pods.go:61] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:05:20.939965   27131 system_pods.go:61] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:05:20.939968   27131 system_pods.go:61] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:05:20.939977   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:05:20.939981   27131 system_pods.go:61] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:05:20.939984   27131 system_pods.go:61] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:05:20.939989   27131 system_pods.go:61] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:05:20.939992   27131 system_pods.go:61] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:05:20.939997   27131 system_pods.go:61] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:05:20.940003   27131 system_pods.go:61] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:05:20.940006   27131 system_pods.go:61] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:05:20.940012   27131 system_pods.go:74] duration metric: took 183.024873ms to wait for pod list to return data ...
	I1105 18:05:20.940022   27131 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:05:21.131476   27131 request.go:632] Waited for 191.3776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:05:21.131535   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:05:21.131540   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.131548   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.131552   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.135052   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:21.135309   27131 default_sa.go:45] found service account: "default"
	I1105 18:05:21.135328   27131 default_sa.go:55] duration metric: took 195.299598ms for default service account to be created ...
	I1105 18:05:21.135339   27131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:05:21.330735   27131 request.go:632] Waited for 195.314096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:21.330794   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:21.330799   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.330807   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.330810   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.335501   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:21.339693   27131 system_pods.go:86] 17 kube-system pods found
	I1105 18:05:21.339720   27131 system_pods.go:89] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:05:21.339726   27131 system_pods.go:89] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:05:21.339731   27131 system_pods.go:89] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:05:21.339734   27131 system_pods.go:89] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:05:21.339738   27131 system_pods.go:89] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:05:21.339741   27131 system_pods.go:89] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:05:21.339745   27131 system_pods.go:89] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:05:21.339748   27131 system_pods.go:89] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:05:21.339751   27131 system_pods.go:89] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:05:21.339755   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:05:21.339759   27131 system_pods.go:89] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:05:21.339762   27131 system_pods.go:89] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:05:21.339765   27131 system_pods.go:89] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:05:21.339769   27131 system_pods.go:89] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:05:21.339774   27131 system_pods.go:89] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:05:21.339779   27131 system_pods.go:89] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:05:21.339782   27131 system_pods.go:89] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:05:21.339788   27131 system_pods.go:126] duration metric: took 204.442408ms to wait for k8s-apps to be running ...
	I1105 18:05:21.339802   27131 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:05:21.339842   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:05:21.354615   27131 system_svc.go:56] duration metric: took 14.795984ms WaitForService to wait for kubelet
	I1105 18:05:21.354651   27131 kubeadm.go:582] duration metric: took 23.660015871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:05:21.354696   27131 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:05:21.531068   27131 request.go:632] Waited for 176.291328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I1105 18:05:21.531146   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes
	I1105 18:05:21.531151   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.531159   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.531164   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.534798   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:21.535495   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:05:21.535541   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:05:21.535562   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:05:21.535565   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:05:21.535570   27131 node_conditions.go:105] duration metric: took 180.868401ms to run NodePressure ...
	I1105 18:05:21.535585   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:05:21.535607   27131 start.go:255] writing updated cluster config ...
	I1105 18:05:21.537763   27131 out.go:201] 
	I1105 18:05:21.539166   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:21.539250   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:21.540660   27131 out.go:177] * Starting "ha-844661-m03" control-plane node in "ha-844661" cluster
	I1105 18:05:21.541637   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:05:21.541660   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:05:21.541776   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:05:21.541788   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:05:21.541886   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:21.542068   27131 start.go:360] acquireMachinesLock for ha-844661-m03: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:05:21.542109   27131 start.go:364] duration metric: took 21.826µs to acquireMachinesLock for "ha-844661-m03"
	I1105 18:05:21.542124   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:05:21.542209   27131 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1105 18:05:21.543860   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:05:21.543943   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:21.543975   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:21.559283   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1105 18:05:21.559671   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:21.560085   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:21.560107   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:21.560440   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:21.560618   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:21.560762   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:21.560967   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:05:21.560994   27131 client.go:168] LocalClient.Create starting
	I1105 18:05:21.561031   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:05:21.561079   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:05:21.561096   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:05:21.561164   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:05:21.561192   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:05:21.561207   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:05:21.561232   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:05:21.561244   27131 main.go:141] libmachine: (ha-844661-m03) Calling .PreCreateCheck
	I1105 18:05:21.561482   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:21.561876   27131 main.go:141] libmachine: Creating machine...
	I1105 18:05:21.561887   27131 main.go:141] libmachine: (ha-844661-m03) Calling .Create
	I1105 18:05:21.562039   27131 main.go:141] libmachine: (ha-844661-m03) Creating KVM machine...
	I1105 18:05:21.563199   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found existing default KVM network
	I1105 18:05:21.563316   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found existing private KVM network mk-ha-844661
	I1105 18:05:21.563415   27131 main.go:141] libmachine: (ha-844661-m03) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 ...
	I1105 18:05:21.563439   27131 main.go:141] libmachine: (ha-844661-m03) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:05:21.563512   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.563393   27902 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:05:21.563587   27131 main.go:141] libmachine: (ha-844661-m03) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:05:21.796365   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.796229   27902 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa...
	I1105 18:05:21.882674   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.882568   27902 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/ha-844661-m03.rawdisk...
	I1105 18:05:21.882702   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Writing magic tar header
	I1105 18:05:21.882713   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Writing SSH key tar header
	I1105 18:05:21.882768   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.882708   27902 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 ...
	I1105 18:05:21.882834   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03
	I1105 18:05:21.882863   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 (perms=drwx------)
	I1105 18:05:21.882876   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:05:21.882896   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:05:21.882908   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:05:21.882922   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:05:21.882944   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:05:21.882956   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:05:21.883017   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home
	I1105 18:05:21.883034   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Skipping /home - not owner
	I1105 18:05:21.883044   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:05:21.883057   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:05:21.883070   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:05:21.883081   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:05:21.883089   27131 main.go:141] libmachine: (ha-844661-m03) Creating domain...
	I1105 18:05:21.883931   27131 main.go:141] libmachine: (ha-844661-m03) define libvirt domain using xml: 
	I1105 18:05:21.883952   27131 main.go:141] libmachine: (ha-844661-m03) <domain type='kvm'>
	I1105 18:05:21.883976   27131 main.go:141] libmachine: (ha-844661-m03)   <name>ha-844661-m03</name>
	I1105 18:05:21.883997   27131 main.go:141] libmachine: (ha-844661-m03)   <memory unit='MiB'>2200</memory>
	I1105 18:05:21.884009   27131 main.go:141] libmachine: (ha-844661-m03)   <vcpu>2</vcpu>
	I1105 18:05:21.884020   27131 main.go:141] libmachine: (ha-844661-m03)   <features>
	I1105 18:05:21.884028   27131 main.go:141] libmachine: (ha-844661-m03)     <acpi/>
	I1105 18:05:21.884038   27131 main.go:141] libmachine: (ha-844661-m03)     <apic/>
	I1105 18:05:21.884046   27131 main.go:141] libmachine: (ha-844661-m03)     <pae/>
	I1105 18:05:21.884056   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884078   27131 main.go:141] libmachine: (ha-844661-m03)   </features>
	I1105 18:05:21.884099   27131 main.go:141] libmachine: (ha-844661-m03)   <cpu mode='host-passthrough'>
	I1105 18:05:21.884109   27131 main.go:141] libmachine: (ha-844661-m03)   
	I1105 18:05:21.884119   27131 main.go:141] libmachine: (ha-844661-m03)   </cpu>
	I1105 18:05:21.884129   27131 main.go:141] libmachine: (ha-844661-m03)   <os>
	I1105 18:05:21.884134   27131 main.go:141] libmachine: (ha-844661-m03)     <type>hvm</type>
	I1105 18:05:21.884144   27131 main.go:141] libmachine: (ha-844661-m03)     <boot dev='cdrom'/>
	I1105 18:05:21.884151   27131 main.go:141] libmachine: (ha-844661-m03)     <boot dev='hd'/>
	I1105 18:05:21.884159   27131 main.go:141] libmachine: (ha-844661-m03)     <bootmenu enable='no'/>
	I1105 18:05:21.884169   27131 main.go:141] libmachine: (ha-844661-m03)   </os>
	I1105 18:05:21.884183   27131 main.go:141] libmachine: (ha-844661-m03)   <devices>
	I1105 18:05:21.884200   27131 main.go:141] libmachine: (ha-844661-m03)     <disk type='file' device='cdrom'>
	I1105 18:05:21.884216   27131 main.go:141] libmachine: (ha-844661-m03)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/boot2docker.iso'/>
	I1105 18:05:21.884227   27131 main.go:141] libmachine: (ha-844661-m03)       <target dev='hdc' bus='scsi'/>
	I1105 18:05:21.884237   27131 main.go:141] libmachine: (ha-844661-m03)       <readonly/>
	I1105 18:05:21.884245   27131 main.go:141] libmachine: (ha-844661-m03)     </disk>
	I1105 18:05:21.884252   27131 main.go:141] libmachine: (ha-844661-m03)     <disk type='file' device='disk'>
	I1105 18:05:21.884260   27131 main.go:141] libmachine: (ha-844661-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:05:21.884267   27131 main.go:141] libmachine: (ha-844661-m03)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/ha-844661-m03.rawdisk'/>
	I1105 18:05:21.884274   27131 main.go:141] libmachine: (ha-844661-m03)       <target dev='hda' bus='virtio'/>
	I1105 18:05:21.884279   27131 main.go:141] libmachine: (ha-844661-m03)     </disk>
	I1105 18:05:21.884289   27131 main.go:141] libmachine: (ha-844661-m03)     <interface type='network'>
	I1105 18:05:21.884295   27131 main.go:141] libmachine: (ha-844661-m03)       <source network='mk-ha-844661'/>
	I1105 18:05:21.884305   27131 main.go:141] libmachine: (ha-844661-m03)       <model type='virtio'/>
	I1105 18:05:21.884313   27131 main.go:141] libmachine: (ha-844661-m03)     </interface>
	I1105 18:05:21.884318   27131 main.go:141] libmachine: (ha-844661-m03)     <interface type='network'>
	I1105 18:05:21.884326   27131 main.go:141] libmachine: (ha-844661-m03)       <source network='default'/>
	I1105 18:05:21.884330   27131 main.go:141] libmachine: (ha-844661-m03)       <model type='virtio'/>
	I1105 18:05:21.884337   27131 main.go:141] libmachine: (ha-844661-m03)     </interface>
	I1105 18:05:21.884341   27131 main.go:141] libmachine: (ha-844661-m03)     <serial type='pty'>
	I1105 18:05:21.884347   27131 main.go:141] libmachine: (ha-844661-m03)       <target port='0'/>
	I1105 18:05:21.884351   27131 main.go:141] libmachine: (ha-844661-m03)     </serial>
	I1105 18:05:21.884358   27131 main.go:141] libmachine: (ha-844661-m03)     <console type='pty'>
	I1105 18:05:21.884363   27131 main.go:141] libmachine: (ha-844661-m03)       <target type='serial' port='0'/>
	I1105 18:05:21.884377   27131 main.go:141] libmachine: (ha-844661-m03)     </console>
	I1105 18:05:21.884395   27131 main.go:141] libmachine: (ha-844661-m03)     <rng model='virtio'>
	I1105 18:05:21.884408   27131 main.go:141] libmachine: (ha-844661-m03)       <backend model='random'>/dev/random</backend>
	I1105 18:05:21.884417   27131 main.go:141] libmachine: (ha-844661-m03)     </rng>
	I1105 18:05:21.884432   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884441   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884448   27131 main.go:141] libmachine: (ha-844661-m03)   </devices>
	I1105 18:05:21.884457   27131 main.go:141] libmachine: (ha-844661-m03) </domain>
	I1105 18:05:21.884464   27131 main.go:141] libmachine: (ha-844661-m03) 
	I1105 18:05:21.890775   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:13:05:59 in network default
	I1105 18:05:21.891360   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring networks are active...
	I1105 18:05:21.891380   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:21.892107   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring network default is active
	I1105 18:05:21.892388   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring network mk-ha-844661 is active
	I1105 18:05:21.892764   27131 main.go:141] libmachine: (ha-844661-m03) Getting domain xml...
	I1105 18:05:21.893494   27131 main.go:141] libmachine: (ha-844661-m03) Creating domain...
	I1105 18:05:23.118308   27131 main.go:141] libmachine: (ha-844661-m03) Waiting to get IP...
	I1105 18:05:23.119070   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.119438   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.119465   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.119424   27902 retry.go:31] will retry after 298.334175ms: waiting for machine to come up
	I1105 18:05:23.419032   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.419605   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.419622   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.419554   27902 retry.go:31] will retry after 273.113851ms: waiting for machine to come up
	I1105 18:05:23.693944   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.694349   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.694376   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.694317   27902 retry.go:31] will retry after 416.726009ms: waiting for machine to come up
	I1105 18:05:24.112851   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:24.113218   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:24.113249   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:24.113181   27902 retry.go:31] will retry after 551.953216ms: waiting for machine to come up
	I1105 18:05:24.666824   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:24.667304   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:24.667333   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:24.667249   27902 retry.go:31] will retry after 466.975145ms: waiting for machine to come up
	I1105 18:05:25.135836   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:25.136271   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:25.136292   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:25.136228   27902 retry.go:31] will retry after 589.586585ms: waiting for machine to come up
	I1105 18:05:25.726897   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:25.727480   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:25.727508   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:25.727434   27902 retry.go:31] will retry after 968.18251ms: waiting for machine to come up
	I1105 18:05:26.697257   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:26.697626   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:26.697652   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:26.697586   27902 retry.go:31] will retry after 1.127611463s: waiting for machine to come up
	I1105 18:05:27.826904   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:27.827312   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:27.827340   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:27.827258   27902 retry.go:31] will retry after 1.342205842s: waiting for machine to come up
	I1105 18:05:29.171618   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:29.172079   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:29.172146   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:29.172073   27902 retry.go:31] will retry after 1.974625708s: waiting for machine to come up
	I1105 18:05:31.148071   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:31.148482   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:31.148499   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:31.148434   27902 retry.go:31] will retry after 2.71055754s: waiting for machine to come up
	I1105 18:05:33.861975   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:33.862458   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:33.862483   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:33.862417   27902 retry.go:31] will retry after 3.509037885s: waiting for machine to come up
	I1105 18:05:37.373198   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:37.373748   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:37.373778   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:37.373690   27902 retry.go:31] will retry after 4.502442692s: waiting for machine to come up
	I1105 18:05:41.878135   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.878636   27131 main.go:141] libmachine: (ha-844661-m03) Found IP for machine: 192.168.39.52
	I1105 18:05:41.878665   27131 main.go:141] libmachine: (ha-844661-m03) Reserving static IP address...
	I1105 18:05:41.878678   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has current primary IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.879129   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find host DHCP lease matching {name: "ha-844661-m03", mac: "52:54:00:62:70:0e", ip: "192.168.39.52"} in network mk-ha-844661
	I1105 18:05:41.955281   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Getting to WaitForSSH function...
	I1105 18:05:41.955317   27131 main.go:141] libmachine: (ha-844661-m03) Reserved static IP address: 192.168.39.52
	I1105 18:05:41.955331   27131 main.go:141] libmachine: (ha-844661-m03) Waiting for SSH to be available...
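The retry.go lines above are a plain poll-with-growing-backoff loop: the driver re-reads the DHCP leases of network mk-ha-844661 for the new MAC address, sleeping a little longer each time (roughly 0.3s up to 4.5s) until the lease appears and the IP can be reserved. A minimal sketch of that pattern, with a hypothetical lookup callback standing in for the lease query:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForIP polls lookup with a growing delay until it reports an address or the
    // deadline passes. lookup is a hypothetical stand-in for "read the DHCP leases of
    // the libvirt network for this MAC address".
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        delay := 300 * time.Millisecond
        stop := time.Now().Add(timeout)
        for time.Now().Before(stop) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // back off, roughly the 0.3s -> 4.5s growth seen above
            }
        }
        return "", fmt.Errorf("timed out waiting for machine to come up")
    }

    func main() {
        tries := 0
        ip, err := waitForIP(func() (string, bool) {
            tries++
            return "192.168.39.52", tries > 3 // pretend the lease appears on the 4th poll
        }, time.Minute)
        fmt.Println(ip, err)
    }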
	I1105 18:05:41.957358   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.957752   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:41.957781   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.957992   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using SSH client type: external
	I1105 18:05:41.958020   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa (-rw-------)
	I1105 18:05:41.958098   27131 main.go:141] libmachine: (ha-844661-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:05:41.958121   27131 main.go:141] libmachine: (ha-844661-m03) DBG | About to run SSH command:
	I1105 18:05:41.958159   27131 main.go:141] libmachine: (ha-844661-m03) DBG | exit 0
	I1105 18:05:42.086743   27131 main.go:141] libmachine: (ha-844661-m03) DBG | SSH cmd err, output: <nil>: 
	I1105 18:05:42.087041   27131 main.go:141] libmachine: (ha-844661-m03) KVM machine creation complete!
	I1105 18:05:42.087332   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:42.087854   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:42.088045   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:42.088232   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:05:42.088247   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetState
	I1105 18:05:42.089254   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:05:42.089266   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:05:42.089278   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:05:42.089283   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.091449   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.091761   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.091789   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.091901   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.092048   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.092179   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.092313   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.092495   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.092748   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.092763   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:05:42.206064   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
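Both SSH probes above boil down to the same check: the machine counts as reachable once a key-authenticated session can run `exit 0`. The first attempt shells out to /usr/bin/ssh with the long option list shown; after that succeeds, a native Go client is used for the remaining provisioning commands. A small sketch of a native-style probe with golang.org/x/crypto/ssh (a sketch only, not the libmachine implementation):

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady reports whether a key-authenticated session can run a trivial command.
    func sshReady(addr, keyPath string) bool {
        keyPEM, err := os.ReadFile(keyPath)
        if err != nil {
            return false
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            return false
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return false
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return false
        }
        defer sess.Close()
        return sess.Run("exit 0") == nil // the same probe command used in the log
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa"
        log.Println(sshReady("192.168.39.52:22", key))
    }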
	I1105 18:05:42.206086   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:05:42.206094   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.208351   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.208732   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.208750   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.208928   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.209072   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.209271   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.209444   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.209598   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.209769   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.209780   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:05:42.323709   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:05:42.323865   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:05:42.323878   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:05:42.323888   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.324267   27131 buildroot.go:166] provisioning hostname "ha-844661-m03"
	I1105 18:05:42.324297   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.324481   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.327505   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.327833   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.327862   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.328041   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.328248   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.328422   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.328544   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.328776   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.329027   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.329041   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661-m03 && echo "ha-844661-m03" | sudo tee /etc/hostname
	I1105 18:05:42.457338   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661-m03
	
	I1105 18:05:42.457368   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.460928   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.461292   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.461321   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.461510   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.461681   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.461835   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.461969   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.462135   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.462324   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.462348   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:05:42.583532   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:05:42.583564   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:05:42.583578   27131 buildroot.go:174] setting up certificates
	I1105 18:05:42.583593   27131 provision.go:84] configureAuth start
	I1105 18:05:42.583602   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.583890   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:42.586719   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.587067   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.587099   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.587290   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.589736   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.590192   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.590227   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.590360   27131 provision.go:143] copyHostCerts
	I1105 18:05:42.590408   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:05:42.590449   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:05:42.590459   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:05:42.590538   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:05:42.590622   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:05:42.590645   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:05:42.590652   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:05:42.590675   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:05:42.590726   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:05:42.590742   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:05:42.590748   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:05:42.590768   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:05:42.590820   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661-m03 san=[127.0.0.1 192.168.39.52 ha-844661-m03 localhost minikube]
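The server certificate generated here is signed by the local minikube CA and carries the SANs listed in the log (the loopback and node IPs plus the ha-844661-m03, localhost and minikube names), so later TLS connections to the node by IP or by name all verify. A self-contained sketch of issuing such a certificate with crypto/x509 follows; it creates a throwaway CA in memory purely to keep the example runnable, whereas the provisioner loads the existing CA key pair from .minikube/certs:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA, for illustration only.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert whose SAN list mirrors the "san=[...]" log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-844661-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.52")},
            DNSNames:     []string{"ha-844661-m03", "localhost", "minikube"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("issued server cert, %d bytes DER", len(srvDER))
    }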
	I1105 18:05:42.925752   27131 provision.go:177] copyRemoteCerts
	I1105 18:05:42.925808   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:05:42.925833   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.928689   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.929066   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.929101   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.929303   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.929489   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.929666   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.929803   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.020278   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:05:43.020356   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:05:43.044012   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:05:43.044085   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:05:43.067535   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:05:43.067615   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:05:43.091055   27131 provision.go:87] duration metric: took 507.451446ms to configureAuth
	I1105 18:05:43.091084   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:05:43.091353   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:43.091482   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.094765   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.095169   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.095193   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.095384   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.095574   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.095740   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.095896   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.096067   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:43.096263   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:43.096284   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:05:43.325666   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:05:43.325693   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:05:43.325711   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetURL
	I1105 18:05:43.326946   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using libvirt version 6000000
	I1105 18:05:43.329691   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.330121   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.330146   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.330327   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:05:43.330347   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:05:43.330356   27131 client.go:171] duration metric: took 21.769352405s to LocalClient.Create
	I1105 18:05:43.330393   27131 start.go:167] duration metric: took 21.769425686s to libmachine.API.Create "ha-844661"
	I1105 18:05:43.330407   27131 start.go:293] postStartSetup for "ha-844661-m03" (driver="kvm2")
	I1105 18:05:43.330422   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:05:43.330439   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.330671   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:05:43.330693   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.332887   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.333189   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.333218   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.333427   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.333597   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.333764   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.333891   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.421747   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:05:43.425946   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:05:43.425980   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:05:43.426048   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:05:43.426118   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:05:43.426127   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:05:43.426241   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:05:43.436295   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:05:43.461822   27131 start.go:296] duration metric: took 131.400624ms for postStartSetup
	I1105 18:05:43.461911   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:43.462559   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:43.465039   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.465395   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.465419   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.465660   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:43.465861   27131 start.go:128] duration metric: took 21.923641121s to createHost
	I1105 18:05:43.465891   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.468236   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.468751   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.468776   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.468993   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.469148   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.469288   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.469410   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.469542   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:43.469719   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:43.469729   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:05:43.583301   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829943.559053309
	
	I1105 18:05:43.583330   27131 fix.go:216] guest clock: 1730829943.559053309
	I1105 18:05:43.583338   27131 fix.go:229] Guest: 2024-11-05 18:05:43.559053309 +0000 UTC Remote: 2024-11-05 18:05:43.465876826 +0000 UTC m=+142.850569806 (delta=93.176483ms)
	I1105 18:05:43.583357   27131 fix.go:200] guest clock delta is within tolerance: 93.176483ms
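The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it against the host's wall clock; a delta of roughly 93ms is accepted as within tolerance. A small sketch of that comparison (the 2-second tolerance below is an assumption for the example, not minikube's actual threshold):

    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns guest minus host.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        // Values taken from the log lines above.
        delta, err := clockDelta("1730829943.559053309", time.Unix(0, 1730829943465876826))
        if err != nil {
            log.Fatal(err)
        }
        ok := delta < 2*time.Second && delta > -2*time.Second // tolerance assumed for the sketch
        fmt.Println(delta, ok)                                // roughly 93ms, true
    }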
	I1105 18:05:43.583365   27131 start.go:83] releasing machines lock for "ha-844661-m03", held for 22.041249603s
	I1105 18:05:43.583392   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.583670   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:43.586387   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.586835   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.586865   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.589174   27131 out.go:177] * Found network options:
	I1105 18:05:43.590513   27131 out.go:177]   - NO_PROXY=192.168.39.48,192.168.39.38
	W1105 18:05:43.591696   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:05:43.591728   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:05:43.591742   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592264   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592439   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592540   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:05:43.592583   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	W1105 18:05:43.592659   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:05:43.592686   27131 proxy.go:119] fail to check proxy env: Error ip not in block
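The repeated "fail to check proxy env: Error ip not in block" warnings come from testing whether each node address in NO_PROXY is covered by a CIDR block; plain addresses such as 192.168.39.48 are not blocks, so the check logs a warning and moves on. The general shape of such a check looks like the sketch below (a sketch, not minikube's proxy package):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipCoveredByNoProxy reports whether ip matches any entry in a NO_PROXY-style list,
    // treating entries containing a slash as CIDR blocks and everything else as literal hosts.
    func ipCoveredByNoProxy(ip string, noProxy string) bool {
        parsed := net.ParseIP(ip)
        for _, entry := range strings.Split(noProxy, ",") {
            entry = strings.TrimSpace(entry)
            if entry == "" {
                continue
            }
            if strings.Contains(entry, "/") {
                if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(parsed) {
                    return true
                }
                continue
            }
            if entry == ip {
                return true
            }
        }
        return false
    }

    func main() {
        // The NO_PROXY entries in the log are plain addresses, not CIDR blocks, so a
        // block-only check reports "ip not in block" for them.
        fmt.Println(ipCoveredByNoProxy("192.168.39.52", "192.168.39.48,192.168.39.38"))
        fmt.Println(ipCoveredByNoProxy("192.168.39.52", "192.168.39.0/24"))
    }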
	I1105 18:05:43.592773   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:05:43.592798   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.595358   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595711   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.595743   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595763   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595936   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.596109   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.596235   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.596238   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.596260   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.596402   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.596401   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.596521   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.596667   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.596795   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.836071   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:05:43.841664   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:05:43.841742   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:05:43.858022   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:05:43.858050   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:05:43.858129   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:05:43.874613   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:05:43.888461   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:05:43.888526   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:05:43.901586   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:05:43.914516   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:05:44.022716   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:05:44.162802   27131 docker.go:233] disabling docker service ...
	I1105 18:05:44.162867   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:05:44.178520   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:05:44.190518   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:05:44.307326   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:05:44.440411   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:05:44.453238   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:05:44.471519   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:05:44.471573   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.481424   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:05:44.481492   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.491154   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.500794   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.511947   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:05:44.521660   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.531075   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.547126   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
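Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and allow unprivileged low ports via default_sysctls. Assuming the stock 02-crio.conf drop-in layout from the minikube ISO, the file ends up roughly like the fragment below (section names and any other keys depend on the CRI-O build, so treat this as an approximation):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]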
	I1105 18:05:44.557037   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:05:44.565707   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:05:44.565772   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:05:44.580225   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:05:44.590720   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:05:44.720733   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:05:44.813635   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:05:44.813712   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:05:44.818398   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:05:44.818453   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:05:44.821924   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:05:44.862340   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
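After restarting CRI-O, the tool waits up to 60 seconds for /var/run/crio/crio.sock to appear and then for `crictl version` to answer, which produces the RuntimeName/RuntimeVersion block above. The first wait is a simple poll on the socket path; a minimal sketch of that pattern (the 500ms poll interval is an assumption for the example):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a path until it exists or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }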
	I1105 18:05:44.862414   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:05:44.888088   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:05:44.915450   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:05:44.916959   27131 out.go:177]   - env NO_PROXY=192.168.39.48
	I1105 18:05:44.918290   27131 out.go:177]   - env NO_PROXY=192.168.39.48,192.168.39.38
	I1105 18:05:44.919504   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:44.921870   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:44.922342   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:44.922369   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:44.922579   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:05:44.926550   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:05:44.938321   27131 mustload.go:65] Loading cluster: ha-844661
	I1105 18:05:44.938602   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:44.939019   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:44.939070   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:44.954536   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
	I1105 18:05:44.955060   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:44.955556   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:44.955581   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:44.955872   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:44.956050   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:05:44.957611   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:05:44.957920   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:44.957971   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:44.973679   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33387
	I1105 18:05:44.974166   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:44.974646   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:44.974660   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:44.974951   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:44.975198   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:05:44.975390   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.52
	I1105 18:05:44.975402   27131 certs.go:194] generating shared ca certs ...
	I1105 18:05:44.975424   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:44.975543   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:05:44.975579   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:05:44.975587   27131 certs.go:256] generating profile certs ...
	I1105 18:05:44.975659   27131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:05:44.975685   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b
	I1105 18:05:44.975700   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.52 192.168.39.254]
	I1105 18:05:45.201266   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b ...
	I1105 18:05:45.201297   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b: {Name:mk528e0260fc30831e80a622836a2ff38ea38838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:45.201463   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b ...
	I1105 18:05:45.201476   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b: {Name:mkf6f5a9f3c5c5cd5e56be42a7f99d1f16c92ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:45.201544   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:05:45.201685   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:05:45.201845   27131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:05:45.201861   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:05:45.201877   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:05:45.201896   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:05:45.201914   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:05:45.201928   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:05:45.201942   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:05:45.201954   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:05:45.215059   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:05:45.215144   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:05:45.215186   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:05:45.215194   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:05:45.215215   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:05:45.215240   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:05:45.215272   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:05:45.215314   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:05:45.215350   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.215374   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.215398   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.215435   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:05:45.218425   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:45.218874   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:05:45.218901   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:45.219093   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:05:45.219284   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:05:45.219433   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:05:45.219544   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:05:45.291312   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:05:45.296113   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:05:45.309256   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:05:45.313268   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1105 18:05:45.324891   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:05:45.328601   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:05:45.339115   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:05:45.343326   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:05:45.353973   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:05:45.357652   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:05:45.367881   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:05:45.371920   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 18:05:45.381431   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:05:45.405521   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:05:45.428099   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:05:45.450896   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:05:45.472444   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1105 18:05:45.494567   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:05:45.518941   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:05:45.542679   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:05:45.565272   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:05:45.586847   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:05:45.609171   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:05:45.631071   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:05:45.647046   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1105 18:05:45.662643   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:05:45.677589   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:05:45.693263   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:05:45.708513   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 18:05:45.723904   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:05:45.739595   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:05:45.744988   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:05:45.754754   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.759038   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.759097   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.764843   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:05:45.774526   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:05:45.784026   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.788019   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.788066   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.793328   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:05:45.803282   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:05:45.813203   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.817364   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.817407   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.822692   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:05:45.832731   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:05:45.836652   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:05:45.836705   27131 kubeadm.go:934] updating node {m03 192.168.39.52 8443 v1.31.2 crio true true} ...
	I1105 18:05:45.836816   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:05:45.836851   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:05:45.836896   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:05:45.851973   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:05:45.852033   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 18:05:45.852095   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:05:45.861565   27131 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 18:05:45.861624   27131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 18:05:45.871179   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1105 18:05:45.871192   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 18:05:45.871218   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:05:45.871230   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:05:45.871246   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1105 18:05:45.871262   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:05:45.871285   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:05:45.871314   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:05:45.885118   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:05:45.885168   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 18:05:45.885198   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 18:05:45.885198   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 18:05:45.885201   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:05:45.885224   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 18:05:45.895722   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 18:05:45.895762   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1105 18:05:46.776289   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:05:46.785676   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1105 18:05:46.804664   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:05:46.823256   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:05:46.839659   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:05:46.843739   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:05:46.855127   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:05:46.984151   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:05:47.002930   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:05:47.003372   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:47.003427   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:47.019365   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I1105 18:05:47.020121   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:47.020574   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:47.020595   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:47.020908   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:47.021095   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:05:47.021355   27131 start.go:317] joinCluster: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:05:47.021508   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 18:05:47.021529   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:05:47.024802   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:47.025266   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:05:47.025301   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:47.025485   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:05:47.025649   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:05:47.025818   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:05:47.025989   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:05:47.187808   27131 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:05:47.187862   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ywlsrk.n1qe1uf11bwul667 --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03 --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443"
	I1105 18:06:08.756523   27131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ywlsrk.n1qe1uf11bwul667 --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03 --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443": (21.568638959s)
	I1105 18:06:08.756554   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 18:06:09.321152   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661-m03 minikube.k8s.io/updated_at=2024_11_05T18_06_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=false
	I1105 18:06:09.429932   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844661-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 18:06:09.553648   27131 start.go:319] duration metric: took 22.532294884s to joinCluster
	I1105 18:06:09.553756   27131 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:06:09.554141   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:09.555396   27131 out.go:177] * Verifying Kubernetes components...
	I1105 18:06:09.556678   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:09.771512   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:06:09.788145   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:06:09.788384   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:06:09.788445   27131 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.48:8443
	I1105 18:06:09.788700   27131 node_ready.go:35] waiting up to 6m0s for node "ha-844661-m03" to be "Ready" ...
	I1105 18:06:09.788799   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:09.788806   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:09.788814   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:09.788817   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:09.792219   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:10.289451   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:10.289477   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:10.289489   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:10.289494   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:10.292860   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:10.789577   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:10.789602   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:10.789615   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:10.789623   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:10.793572   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.289465   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:11.289484   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:11.289492   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:11.289498   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:11.292734   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.789023   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:11.789052   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:11.789064   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:11.789070   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:11.792248   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.792884   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:12.289577   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:12.289596   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:12.289604   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:12.289609   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:12.292931   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:12.789594   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:12.789615   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:12.789623   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:12.789628   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:12.793282   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.288880   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:13.288900   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:13.288909   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:13.288912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:13.292354   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.789203   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:13.789228   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:13.789240   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:13.789244   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:13.792591   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.793128   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:14.289574   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:14.289596   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:14.289605   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:14.289610   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:14.292856   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:14.789821   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:14.789847   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:14.789858   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:14.789863   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:14.793134   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.289398   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:15.289420   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:15.289428   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:15.289433   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:15.292967   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.789567   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:15.789591   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:15.789602   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:15.789607   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:15.793208   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.793657   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:16.289022   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:16.289046   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:16.289056   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.289062   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:16.309335   27131 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1105 18:06:16.789461   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:16.789479   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:16.789488   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.789492   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:16.793000   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:17.289308   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:17.289333   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:17.289345   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:17.289354   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:17.292729   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:17.789752   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:17.789779   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:17.789791   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:17.789798   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:17.794196   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:17.794657   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:18.288931   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:18.288964   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:18.288972   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:18.288976   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:18.292090   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:18.789058   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:18.789080   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:18.789086   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:18.789090   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:18.792559   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:19.289923   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:19.289950   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:19.289961   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:19.289966   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:19.293279   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:19.789125   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:19.789153   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:19.789164   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:19.789170   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:19.792732   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:20.289126   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:20.289149   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:20.289157   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:20.289162   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:20.292641   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:20.293309   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:20.789527   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:20.789549   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:20.789557   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:20.789561   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:20.792849   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:21.289833   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:21.289856   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:21.289863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:21.289867   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:21.293665   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:21.789877   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:21.789900   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:21.789908   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:21.789912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:21.793341   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:22.289645   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:22.289664   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:22.289672   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:22.289676   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:22.292986   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:22.293503   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:22.789122   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:22.789148   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:22.789160   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:22.789164   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:22.792397   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:23.289550   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:23.289574   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:23.289584   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:23.289591   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:23.293009   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:23.789081   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:23.789104   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:23.789112   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:23.789116   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:23.792559   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:24.289408   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:24.289432   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:24.289444   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:24.289448   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:24.293655   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:24.294170   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:24.789552   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:24.789579   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:24.789592   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:24.789598   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:24.792779   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:25.289364   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:25.289386   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:25.289393   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:25.289398   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:25.293189   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:25.789622   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:25.789644   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:25.789652   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:25.789655   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:25.792920   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.288919   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:26.288944   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:26.288954   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:26.288961   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:26.292248   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.789720   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:26.789741   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:26.789749   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:26.789753   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:26.793339   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.793840   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:27.289627   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:27.289653   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:27.289664   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:27.289671   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:27.292896   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:27.789396   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:27.789418   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:27.789426   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:27.789430   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:27.793104   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.288926   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.288950   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.288958   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.288962   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.292349   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.292934   27131 node_ready.go:49] node "ha-844661-m03" has status "Ready":"True"
	I1105 18:06:28.292959   27131 node_ready.go:38] duration metric: took 18.504244816s for node "ha-844661-m03" to be "Ready" ...
	I1105 18:06:28.292967   27131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:06:28.293052   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:28.293062   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.293069   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.293073   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.298865   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:06:28.305101   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.305172   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4bdfz
	I1105 18:06:28.305180   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.305187   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.305191   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.308014   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.308823   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.308838   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.308845   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.308848   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.311202   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.311752   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.311769   27131 pod_ready.go:82] duration metric: took 6.646273ms for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.311778   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.311825   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5g97
	I1105 18:06:28.311833   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.311839   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.311842   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.314162   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.315006   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.315022   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.315032   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.315037   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.317112   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.317771   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.317790   27131 pod_ready.go:82] duration metric: took 6.006174ms for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.317799   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.317847   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661
	I1105 18:06:28.317855   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.317861   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.317869   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.320184   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.320779   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.320794   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.320801   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.320804   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.323022   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.323542   27131 pod_ready.go:93] pod "etcd-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.323560   27131 pod_ready.go:82] duration metric: took 5.754386ms for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.323568   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.323613   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m02
	I1105 18:06:28.323621   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.323627   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.323631   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.325924   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.326482   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:28.326496   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.326503   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.326510   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.328928   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.329392   27131 pod_ready.go:93] pod "etcd-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.329412   27131 pod_ready.go:82] duration metric: took 5.837481ms for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.329426   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.489824   27131 request.go:632] Waited for 160.309715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m03
	I1105 18:06:28.489893   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m03
	I1105 18:06:28.489899   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.489906   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.489914   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.493239   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.689345   27131 request.go:632] Waited for 195.357359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.689416   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.689422   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.689430   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.689436   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.692948   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.693449   27131 pod_ready.go:93] pod "etcd-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.693468   27131 pod_ready.go:82] duration metric: took 364.031884ms for pod "etcd-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.693488   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.889759   27131 request.go:632] Waited for 196.181442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:06:28.889818   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:06:28.889823   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.889830   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.889836   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.893294   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.089232   27131 request.go:632] Waited for 195.272157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:29.089332   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:29.089345   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.089355   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.089363   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.092371   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:29.093062   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.093081   27131 pod_ready.go:82] duration metric: took 399.581249ms for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.093095   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.289039   27131 request.go:632] Waited for 195.870378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:06:29.289108   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:06:29.289114   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.289121   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.289127   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.292782   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.489337   27131 request.go:632] Waited for 195.348089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:29.489423   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:29.489428   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.489439   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.489446   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.492721   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.493290   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.493309   27131 pod_ready.go:82] duration metric: took 400.203815ms for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.493320   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.689371   27131 request.go:632] Waited for 195.98498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m03
	I1105 18:06:29.689467   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m03
	I1105 18:06:29.689479   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.689489   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.689497   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.692955   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.888986   27131 request.go:632] Waited for 195.295088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:29.889053   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:29.889060   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.889071   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.889080   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.892048   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:29.892533   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.892549   27131 pod_ready.go:82] duration metric: took 399.221552ms for pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.892559   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.089669   27131 request.go:632] Waited for 197.039051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:06:30.089731   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:06:30.089736   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.089745   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.089749   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.093164   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.289306   27131 request.go:632] Waited for 195.324188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:30.289372   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:30.289384   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.289397   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.289407   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.292636   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.293206   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:30.293227   27131 pod_ready.go:82] duration metric: took 400.66121ms for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.293238   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.489536   27131 request.go:632] Waited for 196.217205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:06:30.489646   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:06:30.489658   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.489668   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.489675   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.493045   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.688919   27131 request.go:632] Waited for 195.135908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:30.688971   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:30.688976   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.688984   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.688988   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.692203   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.692968   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:30.692987   27131 pod_ready.go:82] duration metric: took 399.741193ms for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.693001   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.889370   27131 request.go:632] Waited for 196.304824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m03
	I1105 18:06:30.889450   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m03
	I1105 18:06:30.889457   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.889465   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.889472   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.892647   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.089803   27131 request.go:632] Waited for 196.376037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.089851   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.089855   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.089863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.089869   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.093035   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.093548   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.093568   27131 pod_ready.go:82] duration metric: took 400.558908ms for pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.093580   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mk9m" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.289696   27131 request.go:632] Waited for 196.055175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mk9m
	I1105 18:06:31.289756   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mk9m
	I1105 18:06:31.289761   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.289768   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.289772   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.293304   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.489478   27131 request.go:632] Waited for 195.351968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.489541   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.489549   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.489556   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.489562   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.492991   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.493563   27131 pod_ready.go:93] pod "kube-proxy-2mk9m" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.493582   27131 pod_ready.go:82] duration metric: took 399.995731ms for pod "kube-proxy-2mk9m" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.493592   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.689978   27131 request.go:632] Waited for 196.300604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:06:31.690038   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:06:31.690043   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.690050   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.690053   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.693380   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.889851   27131 request.go:632] Waited for 195.375559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:31.889905   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:31.889910   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.889917   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.889922   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.893474   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.894113   27131 pod_ready.go:93] pod "kube-proxy-pjpkh" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.894132   27131 pod_ready.go:82] duration metric: took 400.533639ms for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.894142   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.089665   27131 request.go:632] Waited for 195.450073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:06:32.089735   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:06:32.089740   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.089747   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.089751   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.093190   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.289235   27131 request.go:632] Waited for 195.339549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:32.289293   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:32.289310   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.289317   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.289321   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.292485   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.293147   27131 pod_ready.go:93] pod "kube-proxy-zsbfs" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:32.293172   27131 pod_ready.go:82] duration metric: took 399.02399ms for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.293182   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.489243   27131 request.go:632] Waited for 195.995375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:06:32.489308   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:06:32.489316   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.489324   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.489327   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.493003   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.689901   27131 request.go:632] Waited for 196.356448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:32.689953   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:32.689958   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.689966   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.689970   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.693190   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.693742   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:32.693763   27131 pod_ready.go:82] duration metric: took 400.573652ms for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.693777   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.889556   27131 request.go:632] Waited for 195.689425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:06:32.889607   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:06:32.889612   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.889620   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.889624   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.893476   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.089475   27131 request.go:632] Waited for 195.357977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:33.089527   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:33.089532   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.089539   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.089543   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.092888   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.093460   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:33.093481   27131 pod_ready.go:82] duration metric: took 399.697128ms for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.093491   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.289500   27131 request.go:632] Waited for 195.942997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m03
	I1105 18:06:33.289569   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m03
	I1105 18:06:33.289576   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.289585   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.289589   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.293636   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:33.489851   27131 request.go:632] Waited for 195.367744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:33.489908   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:33.489913   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.489920   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.489924   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.493512   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.494235   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:33.494258   27131 pod_ready.go:82] duration metric: took 400.759685ms for pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.494276   27131 pod_ready.go:39] duration metric: took 5.201298893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:06:33.494295   27131 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:06:33.494356   27131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:06:33.509380   27131 api_server.go:72] duration metric: took 23.955584698s to wait for apiserver process to appear ...
	I1105 18:06:33.509409   27131 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:06:33.509433   27131 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1105 18:06:33.514022   27131 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1105 18:06:33.514097   27131 round_trippers.go:463] GET https://192.168.39.48:8443/version
	I1105 18:06:33.514107   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.514114   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.514119   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.514958   27131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 18:06:33.515041   27131 api_server.go:141] control plane version: v1.31.2
	I1105 18:06:33.515056   27131 api_server.go:131] duration metric: took 5.640397ms to wait for apiserver health ...
	I1105 18:06:33.515062   27131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:06:33.689459   27131 request.go:632] Waited for 174.322152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:33.689543   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:33.689554   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.689564   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.689570   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.695696   27131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:06:33.701785   27131 system_pods.go:59] 24 kube-system pods found
	I1105 18:06:33.701817   27131 system_pods.go:61] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:06:33.701822   27131 system_pods.go:61] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:06:33.701826   27131 system_pods.go:61] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:06:33.701829   27131 system_pods.go:61] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:06:33.701832   27131 system_pods.go:61] "etcd-ha-844661-m03" [c8179289-e67f-4a2b-bba3-1387aa107d3e] Running
	I1105 18:06:33.701836   27131 system_pods.go:61] "kindnet-fzrh6" [985ef0b3-91cc-4965-a1f3-a8e468eba2ee] Running
	I1105 18:06:33.701839   27131 system_pods.go:61] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:06:33.701842   27131 system_pods.go:61] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:06:33.701845   27131 system_pods.go:61] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:06:33.701849   27131 system_pods.go:61] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:06:33.701852   27131 system_pods.go:61] "kube-apiserver-ha-844661-m03" [57a94b5d-466e-4d87-ba16-ceba58d65ee0] Running
	I1105 18:06:33.701858   27131 system_pods.go:61] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:06:33.701864   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:06:33.701868   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m03" [dcadcdf5-6004-49a9-800b-f27798ab06db] Running
	I1105 18:06:33.701872   27131 system_pods.go:61] "kube-proxy-2mk9m" [483f248e-9776-4c11-a088-a2cbd152602b] Running
	I1105 18:06:33.701875   27131 system_pods.go:61] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:06:33.701879   27131 system_pods.go:61] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:06:33.701882   27131 system_pods.go:61] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:06:33.701886   27131 system_pods.go:61] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:06:33.701889   27131 system_pods.go:61] "kube-scheduler-ha-844661-m03" [711f353f-ee82-4066-98ff-e3349082bf32] Running
	I1105 18:06:33.701894   27131 system_pods.go:61] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:06:33.701897   27131 system_pods.go:61] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:06:33.701900   27131 system_pods.go:61] "kube-vip-ha-844661-m03" [5ebe3d8b-e1e2-4d10-bf5c-d88148144dd1] Running
	I1105 18:06:33.701903   27131 system_pods.go:61] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:06:33.701909   27131 system_pods.go:74] duration metric: took 186.841773ms to wait for pod list to return data ...
	I1105 18:06:33.701919   27131 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:06:33.889363   27131 request.go:632] Waited for 187.358199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:06:33.889435   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:06:33.889442   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.889452   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.889459   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.893683   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:33.893791   27131 default_sa.go:45] found service account: "default"
	I1105 18:06:33.893804   27131 default_sa.go:55] duration metric: took 191.879725ms for default service account to be created ...
	I1105 18:06:33.893811   27131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:06:34.089215   27131 request.go:632] Waited for 195.345636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:34.089283   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:34.089291   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:34.089303   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:34.089323   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:34.096363   27131 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:06:34.102465   27131 system_pods.go:86] 24 kube-system pods found
	I1105 18:06:34.102491   27131 system_pods.go:89] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:06:34.102496   27131 system_pods.go:89] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:06:34.102501   27131 system_pods.go:89] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:06:34.102505   27131 system_pods.go:89] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:06:34.102508   27131 system_pods.go:89] "etcd-ha-844661-m03" [c8179289-e67f-4a2b-bba3-1387aa107d3e] Running
	I1105 18:06:34.102512   27131 system_pods.go:89] "kindnet-fzrh6" [985ef0b3-91cc-4965-a1f3-a8e468eba2ee] Running
	I1105 18:06:34.102515   27131 system_pods.go:89] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:06:34.102519   27131 system_pods.go:89] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:06:34.102522   27131 system_pods.go:89] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:06:34.102525   27131 system_pods.go:89] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:06:34.102529   27131 system_pods.go:89] "kube-apiserver-ha-844661-m03" [57a94b5d-466e-4d87-ba16-ceba58d65ee0] Running
	I1105 18:06:34.102533   27131 system_pods.go:89] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:06:34.102537   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:06:34.102541   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m03" [dcadcdf5-6004-49a9-800b-f27798ab06db] Running
	I1105 18:06:34.102545   27131 system_pods.go:89] "kube-proxy-2mk9m" [483f248e-9776-4c11-a088-a2cbd152602b] Running
	I1105 18:06:34.102551   27131 system_pods.go:89] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:06:34.102554   27131 system_pods.go:89] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:06:34.102557   27131 system_pods.go:89] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:06:34.102561   27131 system_pods.go:89] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:06:34.102564   27131 system_pods.go:89] "kube-scheduler-ha-844661-m03" [711f353f-ee82-4066-98ff-e3349082bf32] Running
	I1105 18:06:34.102569   27131 system_pods.go:89] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:06:34.102573   27131 system_pods.go:89] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:06:34.102578   27131 system_pods.go:89] "kube-vip-ha-844661-m03" [5ebe3d8b-e1e2-4d10-bf5c-d88148144dd1] Running
	I1105 18:06:34.102581   27131 system_pods.go:89] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:06:34.102586   27131 system_pods.go:126] duration metric: took 208.77013ms to wait for k8s-apps to be running ...
	I1105 18:06:34.102595   27131 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:06:34.102637   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:06:34.118557   27131 system_svc.go:56] duration metric: took 15.951864ms WaitForService to wait for kubelet
	I1105 18:06:34.118583   27131 kubeadm.go:582] duration metric: took 24.564791625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:06:34.118612   27131 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:06:34.288972   27131 request.go:632] Waited for 170.274451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I1105 18:06:34.289022   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes
	I1105 18:06:34.289035   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:34.289055   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:34.289062   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:34.292646   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:34.294249   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294283   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294309   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294316   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294322   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294327   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294335   27131 node_conditions.go:105] duration metric: took 175.714114ms to run NodePressure ...
	I1105 18:06:34.294352   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:06:34.294390   27131 start.go:255] writing updated cluster config ...
	I1105 18:06:34.294711   27131 ssh_runner.go:195] Run: rm -f paused
	I1105 18:06:34.347073   27131 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 18:06:34.348891   27131 out.go:177] * Done! kubectl is now configured to use "ha-844661" cluster and "default" namespace by default
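	For context on the "waiting for apiserver healthz status" step logged above (api_server.go checking https://192.168.39.48:8443/healthz until it returns 200 "ok"), the following is a minimal illustrative Go sketch of that style of readiness poll. It is not minikube's actual implementation; the endpoint URL, the timeout, the poll interval, and the TLS handling are assumptions made only for the example.

	// Illustrative sketch of an apiserver /healthz readiness poll.
	// Assumptions: endpoint URL, timeout, poll interval, and skipping TLS
	// verification are placeholders, not minikube's real configuration.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// A real client would trust the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// The log above shows the healthy apiserver answering 200 with body "ok".
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.48:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}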
	
	
	==> CRI-O <==
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.177392785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830219177344430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=407c4f22-dcfe-4d41-90a9-e0ff44f34c7b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.177850223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a972526-036b-4f40-b2c7-ec5f55871114 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.177905067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a972526-036b-4f40-b2c7-ec5f55871114 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.178152552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a972526-036b-4f40-b2c7-ec5f55871114 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.218394724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d72117d0-6fdb-414e-9cb0-a0d7afc7c64e name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.218468454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d72117d0-6fdb-414e-9cb0-a0d7afc7c64e name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.219602518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e779151c-5f74-4556-82ec-7af3f51f2193 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.220085887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830219220063236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e779151c-5f74-4556-82ec-7af3f51f2193 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.220781544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1620153-ec43-47de-8051-553835a9dfb5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.220860453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1620153-ec43-47de-8051-553835a9dfb5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.221111856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1620153-ec43-47de-8051-553835a9dfb5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.257092562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa42ebd8-2094-4b51-8e3b-7ded9d23b173 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.257221801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa42ebd8-2094-4b51-8e3b-7ded9d23b173 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.258441289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3898bede-7791-479c-9ae4-825813c2c380 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.258893026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830219258872289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3898bede-7791-479c-9ae4-825813c2c380 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.259482365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1e3ae96-8852-4b5e-bb13-3b5db1b91504 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.259572549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1e3ae96-8852-4b5e-bb13-3b5db1b91504 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.260013140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1e3ae96-8852-4b5e-bb13-3b5db1b91504 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.298572417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc843d7a-6747-4ea7-a606-e235b877e1d3 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.298663399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc843d7a-6747-4ea7-a606-e235b877e1d3 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.299884784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f424ceba-75fc-4369-a587-40439485869f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.300784728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830219300666893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f424ceba-75fc-4369-a587-40439485869f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.301358105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e7b6846-63db-4873-8ab4-978a8c269dd9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.301453026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e7b6846-63db-4873-8ab4-978a8c269dd9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:19 ha-844661 crio[658]: time="2024-11-05 18:10:19.302005332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e7b6846-63db-4873-8ab4-978a8c269dd9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f547082b18e22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   27e18ae242703       busybox-7dff88458-lzhpc
	4504233c88e52       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   7b8c6b865e4b8       coredns-7c65d6cfc9-4bdfz
	2c9fc5d833b41       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   44bedf8a84dbf       coredns-7c65d6cfc9-s5g97
	258fd7ae93626       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b59a04159a4fb       storage-provisioner
	bf77486744a30       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   565a0867a4a3a       kindnet-vz22j
	1c753c07805a4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   a2589ca7aa1a5       kube-proxy-pjpkh
	9fc3970511492       ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f     6 minutes ago       Running             kube-vip                  0                   229c492a7d447       kube-vip-ha-844661
	f06b75f1a2501       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   da4d3442917c5       etcd-ha-844661
	695ba2636aaa9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   45ce87c5b9a86       kube-scheduler-ha-844661
	d6c4df0798539       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   c3cdeb3fb2bc9       kube-apiserver-ha-844661
	9fc529f9c17c8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   8cfef6eeee31d       kube-controller-manager-ha-844661
	
	
	==> coredns [2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a] <==
	[INFO] 10.244.3.2:48122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001817736s
	[INFO] 10.244.1.2:41485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154354s
	[INFO] 10.244.0.4:48696 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00417262s
	[INFO] 10.244.0.4:39724 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011241203s
	[INFO] 10.244.0.4:33801 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201157s
	[INFO] 10.244.3.2:59342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205557s
	[INFO] 10.244.3.2:38358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000335352s
	[INFO] 10.244.3.2:50220 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290051s
	[INFO] 10.244.1.2:42991 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002076706s
	[INFO] 10.244.1.2:38070 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182659s
	[INFO] 10.244.1.2:38061 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120824s
	[INFO] 10.244.0.4:55480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107684s
	[INFO] 10.244.3.2:54459 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094155s
	[INFO] 10.244.3.2:56770 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159318s
	[INFO] 10.244.1.2:46930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145588s
	[INFO] 10.244.1.2:51686 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000234893s
	[INFO] 10.244.1.2:43604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089852s
	[INFO] 10.244.0.4:59908 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00031712s
	[INFO] 10.244.3.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016445s
	[INFO] 10.244.3.2:35219 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306046s
	[INFO] 10.244.3.2:45286 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016761s
	[INFO] 10.244.1.2:48376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282486s
	[INFO] 10.244.1.2:44477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097938s
	[INFO] 10.244.1.2:51521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175252s
	[INFO] 10.244.1.2:42468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076611s
	
	
	==> coredns [4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8] <==
	[INFO] 10.244.0.4:38561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176278s
	[INFO] 10.244.0.4:47328 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000239279s
	[INFO] 10.244.0.4:37188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002005s
	[INFO] 10.244.0.4:40443 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116158s
	[INFO] 10.244.0.4:39770 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000216794s
	[INFO] 10.244.3.2:58499 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947267s
	[INFO] 10.244.3.2:50696 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001435907s
	[INFO] 10.244.3.2:53598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101366s
	[INFO] 10.244.3.2:40278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021319s
	[INFO] 10.244.3.2:35533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073855s
	[INFO] 10.244.1.2:57627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215883s
	[INFO] 10.244.1.2:58558 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015092s
	[INFO] 10.244.1.2:44310 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409552s
	[INFO] 10.244.1.2:44445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145932s
	[INFO] 10.244.1.2:53561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124269s
	[INFO] 10.244.0.4:42872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279983s
	[INFO] 10.244.0.4:56987 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127988s
	[INFO] 10.244.0.4:36230 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209676s
	[INFO] 10.244.3.2:59508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020584s
	[INFO] 10.244.3.2:54542 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160368s
	[INFO] 10.244.1.2:52317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136132s
	[INFO] 10.244.0.4:56988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179513s
	[INFO] 10.244.0.4:39632 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244979s
	[INFO] 10.244.0.4:60960 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110854s
	[INFO] 10.244.3.2:58476 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000304046s
	
	
	==> describe nodes <==
	Name:               ha-844661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T18_03_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:03:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-844661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee44951a983a4e549dbb04cb8a2493c9
	  System UUID:                ee44951a-983a-4e54-9dbb-04cb8a2493c9
	  Boot ID:                    4c65764c-54aa-465a-bc8a-8a5365b789a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzhpc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-4bdfz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-s5g97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-844661                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-vz22j                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-844661             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-844661    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-pjpkh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-844661             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-844661                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m15s  kube-proxy       
	  Normal  Starting                 6m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-844661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-844661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-844661 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	  Normal  NodeReady                6m     kubelet          Node ha-844661 status is now: NodeReady
	  Normal  RegisteredNode           5m17s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	  Normal  RegisteredNode           4m5s   node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	
	
	Name:               ha-844661-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_04_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    ha-844661-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75eddb8895b44c028e3869c19333df27
	  System UUID:                75eddb88-95b4-4c02-8e38-69c19333df27
	  Boot ID:                    703a3f97-42af-45ac-b300-e4714fc82ae4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vkchm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-844661-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-q898d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-844661-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-844661-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-zsbfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-844661-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-vip-ha-844661-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m25s                  cidrAllocator    Node ha-844661-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-844661-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-844661-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-844661-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-844661-m02 status is now: NodeNotReady
	
	
	Name:               ha-844661-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_06_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:06:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    ha-844661-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eaab072d40e24724bda026ac82fdd308
	  System UUID:                eaab072d-40e2-4724-bda0-26ac82fdd308
	  Boot ID:                    db511fc0-c5d5-4348-8360-c6fc1b44808f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mwvv2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-844661-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-fzrh6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-844661-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-controller-manager-ha-844661-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-2mk9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-844661-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-vip-ha-844661-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m13s                  cidrAllocator    Node ha-844661-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-844661-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-844661-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-844661-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	
	
	Name:               ha-844661-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_07_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-844661-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9adceb878ab74645bb56707a0ab9854e
	  System UUID:                9adceb87-8ab7-4645-bb56-707a0ab9854e
	  Boot ID:                    0b1794d4-8e9f-4a02-ba93-5010c0d8fbf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7tcjz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m6s
	  kube-system                 kube-proxy-8bw6z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     3m6s                 cidrAllocator    Node ha-844661-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     3m6s                 cidrAllocator    Node ha-844661-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)  kubelet          Node ha-844661-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)  kubelet          Node ha-844661-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)  kubelet          Node ha-844661-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  NodeReady                2m46s                kubelet          Node ha-844661-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 5 18:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051370] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036705] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826003] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.830792] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.518259] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.512732] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.062769] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057746] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.181267] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.115768] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.273995] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.824232] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.167137] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.060834] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.275907] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.079756] kauditd_printk_skb: 79 callbacks suppressed
	[Nov 5 18:04] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.402917] kauditd_printk_skb: 32 callbacks suppressed
	[Nov 5 18:05] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc] <==
	{"level":"warn","ts":"2024-11-05T18:10:19.549292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.556130Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.559929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.571896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.579022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.585681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.588879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.592256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.604741Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.611536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.618299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.623217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.626012Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.626797Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.633361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.639658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.646371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.650005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.654130Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.658941Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.665347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.680433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.725463Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.728489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:19.769333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:10:19 up 6 min,  0 users,  load average: 0.32, 0.42, 0.21
	Linux ha-844661 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf] <==
	I1105 18:09:48.981804       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:09:58.979695       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:09:58.979736       1 main.go:301] handling current node
	I1105 18:09:58.979751       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:09:58.979757       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:09:58.979941       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:09:58.979961       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:09:58.980047       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:09:58.980065       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:10:08.975320       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:10:08.975425       1 main.go:301] handling current node
	I1105 18:10:08.975448       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:10:08.975457       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:10:08.975728       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:10:08.975758       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:10:08.975910       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:10:08.975933       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:10:18.980134       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:10:18.980289       1 main.go:301] handling current node
	I1105 18:10:18.980325       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:10:18.980334       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:10:18.980658       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:10:18.980687       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:10:18.980836       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:10:18.980863       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f] <==
	W1105 18:03:56.787950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.48]
	I1105 18:03:56.789794       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:03:56.795759       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:03:56.988233       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 18:03:58.574343       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 18:03:58.589042       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1105 18:03:58.611994       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 18:04:02.140726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1105 18:04:02.242563       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1105 18:06:39.847316       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39688: use of closed network connection
	E1105 18:06:40.021738       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39706: use of closed network connection
	E1105 18:06:40.204127       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39716: use of closed network connection
	E1105 18:06:40.398615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39728: use of closed network connection
	E1105 18:06:40.573865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39736: use of closed network connection
	E1105 18:06:40.752398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39760: use of closed network connection
	E1105 18:06:40.936783       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39766: use of closed network connection
	E1105 18:06:41.111519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39780: use of closed network connection
	E1105 18:06:41.286054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39802: use of closed network connection
	E1105 18:06:41.573950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39826: use of closed network connection
	E1105 18:06:41.738524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39836: use of closed network connection
	E1105 18:06:41.904845       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39854: use of closed network connection
	E1105 18:06:42.073866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39862: use of closed network connection
	E1105 18:06:42.246567       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39868: use of closed network connection
	E1105 18:06:42.411961       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39894: use of closed network connection
	W1105 18:08:06.801135       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.48 192.168.39.52]
	
	
	==> kube-controller-manager [9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c] <==
	E1105 18:07:13.653435       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-844661-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-844661-m04"
	E1105 18:07:13.653555       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-844661-m04': failed to patch node CIDR: Node \"ha-844661-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1105 18:07:13.653638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:13.659637       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:13.797662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:14.149565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:14.559123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:16.780529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:16.780718       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-844661-m04"
	I1105 18:07:16.994375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:17.944364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:18.017747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:23.969145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:33.222978       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844661-m04"
	I1105 18:07:33.223667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:33.239449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:34.533989       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:44.277626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:08:29.557990       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844661-m04"
	I1105 18:08:29.558983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:29.585475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:29.697679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.853166ms"
	I1105 18:08:29.699962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.926µs"
	I1105 18:08:31.887524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:34.788426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	
	
	==> kube-proxy [1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:04:03.571824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:04:03.590655       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E1105 18:04:03.590765       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:04:03.621086       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:04:03.621144       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:04:03.621208       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:04:03.623505       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:04:03.623772       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:04:03.623783       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:04:03.625873       1 config.go:199] "Starting service config controller"
	I1105 18:04:03.625922       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:04:03.625956       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:04:03.625972       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:04:03.628076       1 config.go:328] "Starting node config controller"
	I1105 18:04:03.628108       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:04:03.726043       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:04:03.726043       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:04:03.728252       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab] <==
	E1105 18:03:56.072125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.276682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 18:03:56.276737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.329770       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 18:03:56.329820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.398642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:03:56.398687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1105 18:03:57.639067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 18:06:35.211549       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9e352dc6-ed87-4112-85c5-a76c00a8912f" pod="default/busybox-7dff88458-vkchm" assumedNode="ha-844661-m02" currentNode="ha-844661-m03"
	E1105 18:06:35.223911       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vkchm\": pod busybox-7dff88458-vkchm is already assigned to node \"ha-844661-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vkchm" node="ha-844661-m03"
	E1105 18:06:35.226313       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9e352dc6-ed87-4112-85c5-a76c00a8912f(default/busybox-7dff88458-vkchm) was assumed on ha-844661-m03 but assigned to ha-844661-m02" pod="default/busybox-7dff88458-vkchm"
	E1105 18:06:35.226429       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vkchm\": pod busybox-7dff88458-vkchm is already assigned to node \"ha-844661-m02\"" pod="default/busybox-7dff88458-vkchm"
	I1105 18:06:35.226528       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vkchm" node="ha-844661-m02"
	E1105 18:06:35.274759       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lzhpc\": pod busybox-7dff88458-lzhpc is already assigned to node \"ha-844661\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lzhpc" node="ha-844661"
	E1105 18:06:35.275967       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8687b103-4a1a-4529-9efd-46405325fb04(default/busybox-7dff88458-lzhpc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lzhpc"
	E1105 18:06:35.276226       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lzhpc\": pod busybox-7dff88458-lzhpc is already assigned to node \"ha-844661\"" pod="default/busybox-7dff88458-lzhpc"
	I1105 18:06:35.276363       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lzhpc" node="ha-844661"
	E1105 18:07:13.665747       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tfzng\": pod kube-proxy-tfzng is already assigned to node \"ha-844661-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tfzng" node="ha-844661-m04"
	E1105 18:07:13.665825       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f52b30f-7446-45ac-bb36-73398ffbfbc2(kube-system/kube-proxy-tfzng) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tfzng"
	E1105 18:07:13.665842       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tfzng\": pod kube-proxy-tfzng is already assigned to node \"ha-844661-m04\"" pod="kube-system/kube-proxy-tfzng"
	I1105 18:07:13.665872       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tfzng" node="ha-844661-m04"
	E1105 18:07:13.666212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vjq6v\": pod kindnet-vjq6v is already assigned to node \"ha-844661-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vjq6v" node="ha-844661-m04"
	E1105 18:07:13.666376       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d9f2bfec-eb1f-4373-bf3a-414ed6c8a630(kube-system/kindnet-vjq6v) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vjq6v"
	E1105 18:07:13.666420       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vjq6v\": pod kindnet-vjq6v is already assigned to node \"ha-844661-m04\"" pod="kube-system/kindnet-vjq6v"
	I1105 18:07:13.666453       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vjq6v" node="ha-844661-m04"
	
	
	==> kubelet <==
	Nov 05 18:08:58 ha-844661 kubelet[1296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:08:58 ha-844661 kubelet[1296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:08:58 ha-844661 kubelet[1296]: E1105 18:08:58.595270    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830138594734384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:58 ha-844661 kubelet[1296]: E1105 18:08:58.595295    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830138594734384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:08 ha-844661 kubelet[1296]: E1105 18:09:08.597057    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830148596755320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:08 ha-844661 kubelet[1296]: E1105 18:09:08.597097    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830148596755320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:18 ha-844661 kubelet[1296]: E1105 18:09:18.599471    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830158599122023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:18 ha-844661 kubelet[1296]: E1105 18:09:18.599506    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830158599122023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:28 ha-844661 kubelet[1296]: E1105 18:09:28.601448    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830168600902243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:28 ha-844661 kubelet[1296]: E1105 18:09:28.601554    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830168600902243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:38 ha-844661 kubelet[1296]: E1105 18:09:38.606338    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830178605104359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:38 ha-844661 kubelet[1296]: E1105 18:09:38.606359    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830178605104359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:48 ha-844661 kubelet[1296]: E1105 18:09:48.608274    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830188607885225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:48 ha-844661 kubelet[1296]: E1105 18:09:48.608666    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830188607885225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.519242    1296 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:09:58 ha-844661 kubelet[1296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.611279    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830198610818845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.611302    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830198610818845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:08 ha-844661 kubelet[1296]: E1105 18:10:08.613551    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830208612853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:08 ha-844661 kubelet[1296]: E1105 18:10:08.613956    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830208612853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:18 ha-844661 kubelet[1296]: E1105 18:10:18.616403    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830218615829286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:18 ha-844661 kubelet[1296]: E1105 18:10:18.616436    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830218615829286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844661 -n ha-844661
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.64s)

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.41s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr: (4.016644513s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844661 -n ha-844661
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 logs -n 25: (1.39531847s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m03_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m04 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp testdata/cp-test.txt                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m04_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03:/home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m03 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-844661 node stop m02 -v=7                                                     | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-844661 node start m02 -v=7                                                    | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
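	
	For reference, the copy-and-verify sequence recorded in the table above can be written out as explicit minikube invocations. This is only a sketch: the profile name ha-844661, the node names, and the file paths come from the rows above, while the explicit -p profile flag and the quoting of the remote command are assumptions (the table abbreviates the exact command lines).
	
	  minikube -p ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt \
	      ha-844661:/home/docker/cp-test_ha-844661-m04_ha-844661.txt      # copy a file from node m04 to the primary node
	  minikube -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test_ha-844661-m04_ha-844661.txt"   # confirm the copy arrived
	  minikube -p ha-844661 node stop m02 -v=7 --alsologtostderr          # stop node m02 (the table records no completion time for this step)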
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:03:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:03:20.652608   27131 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:03:20.652749   27131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:03:20.652760   27131 out.go:358] Setting ErrFile to fd 2...
	I1105 18:03:20.652767   27131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:03:20.652948   27131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:03:20.653500   27131 out.go:352] Setting JSON to false
	I1105 18:03:20.654349   27131 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2743,"bootTime":1730827058,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:03:20.654437   27131 start.go:139] virtualization: kvm guest
	I1105 18:03:20.656534   27131 out.go:177] * [ha-844661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:03:20.657972   27131 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:03:20.658005   27131 notify.go:220] Checking for updates...
	I1105 18:03:20.660463   27131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:03:20.661864   27131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:03:20.663111   27131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:20.664367   27131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:03:20.665603   27131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:03:20.666934   27131 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:03:20.701089   27131 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 18:03:20.702358   27131 start.go:297] selected driver: kvm2
	I1105 18:03:20.702375   27131 start.go:901] validating driver "kvm2" against <nil>
	I1105 18:03:20.702385   27131 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:03:20.703116   27131 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:03:20.703189   27131 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:03:20.718290   27131 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:03:20.718330   27131 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 18:03:20.718556   27131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:03:20.718584   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:20.718622   27131 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1105 18:03:20.718632   27131 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 18:03:20.718676   27131 start.go:340] cluster config:
	{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:03:20.718795   27131 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:03:20.720599   27131 out.go:177] * Starting "ha-844661" primary control-plane node in "ha-844661" cluster
	I1105 18:03:20.721815   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:03:20.721849   27131 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:03:20.721872   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:03:20.721982   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:03:20.721996   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:03:20.722409   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:03:20.722435   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json: {Name:mkaefcdd76905e10868a2bf21132faf3026da59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:20.722574   27131 start.go:360] acquireMachinesLock for ha-844661: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:03:20.722613   27131 start.go:364] duration metric: took 21.652µs to acquireMachinesLock for "ha-844661"
	I1105 18:03:20.722627   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:03:20.722690   27131 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 18:03:20.724172   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:03:20.724279   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:03:20.724320   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:03:20.738289   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I1105 18:03:20.738756   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:03:20.739283   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:03:20.739302   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:03:20.739702   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:03:20.739881   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:20.740007   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:20.740175   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:03:20.740205   27131 client.go:168] LocalClient.Create starting
	I1105 18:03:20.740238   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:03:20.740272   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:03:20.740288   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:03:20.740341   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:03:20.740359   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:03:20.740374   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:03:20.740388   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:03:20.740400   27131 main.go:141] libmachine: (ha-844661) Calling .PreCreateCheck
	I1105 18:03:20.740713   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:20.741068   27131 main.go:141] libmachine: Creating machine...
	I1105 18:03:20.741080   27131 main.go:141] libmachine: (ha-844661) Calling .Create
	I1105 18:03:20.741210   27131 main.go:141] libmachine: (ha-844661) Creating KVM machine...
	I1105 18:03:20.742313   27131 main.go:141] libmachine: (ha-844661) DBG | found existing default KVM network
	I1105 18:03:20.742933   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:20.742806   27154 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1105 18:03:20.742963   27131 main.go:141] libmachine: (ha-844661) DBG | created network xml: 
	I1105 18:03:20.742994   27131 main.go:141] libmachine: (ha-844661) DBG | <network>
	I1105 18:03:20.743008   27131 main.go:141] libmachine: (ha-844661) DBG |   <name>mk-ha-844661</name>
	I1105 18:03:20.743015   27131 main.go:141] libmachine: (ha-844661) DBG |   <dns enable='no'/>
	I1105 18:03:20.743024   27131 main.go:141] libmachine: (ha-844661) DBG |   
	I1105 18:03:20.743029   27131 main.go:141] libmachine: (ha-844661) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1105 18:03:20.743036   27131 main.go:141] libmachine: (ha-844661) DBG |     <dhcp>
	I1105 18:03:20.743041   27131 main.go:141] libmachine: (ha-844661) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1105 18:03:20.743049   27131 main.go:141] libmachine: (ha-844661) DBG |     </dhcp>
	I1105 18:03:20.743053   27131 main.go:141] libmachine: (ha-844661) DBG |   </ip>
	I1105 18:03:20.743060   27131 main.go:141] libmachine: (ha-844661) DBG |   
	I1105 18:03:20.743066   27131 main.go:141] libmachine: (ha-844661) DBG | </network>
	I1105 18:03:20.743074   27131 main.go:141] libmachine: (ha-844661) DBG | 
	I1105 18:03:20.748364   27131 main.go:141] libmachine: (ha-844661) DBG | trying to create private KVM network mk-ha-844661 192.168.39.0/24...
	I1105 18:03:20.811114   27131 main.go:141] libmachine: (ha-844661) DBG | private KVM network mk-ha-844661 192.168.39.0/24 created
	I1105 18:03:20.811141   27131 main.go:141] libmachine: (ha-844661) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 ...
	I1105 18:03:20.811159   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:20.811087   27154 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:20.811177   27131 main.go:141] libmachine: (ha-844661) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:03:20.811237   27131 main.go:141] libmachine: (ha-844661) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:03:21.057798   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.057650   27154 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa...
	I1105 18:03:21.226724   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.226590   27154 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/ha-844661.rawdisk...
	I1105 18:03:21.226750   27131 main.go:141] libmachine: (ha-844661) DBG | Writing magic tar header
	I1105 18:03:21.226760   27131 main.go:141] libmachine: (ha-844661) DBG | Writing SSH key tar header
	I1105 18:03:21.226768   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.226707   27154 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 ...
	I1105 18:03:21.226781   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661
	I1105 18:03:21.226859   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 (perms=drwx------)
	I1105 18:03:21.226880   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:03:21.226887   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:03:21.226897   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:21.226904   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:03:21.226909   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:03:21.226916   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:03:21.226920   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:03:21.226927   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home
	I1105 18:03:21.226932   27131 main.go:141] libmachine: (ha-844661) DBG | Skipping /home - not owner
	I1105 18:03:21.226941   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:03:21.226950   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:03:21.226957   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:03:21.226962   27131 main.go:141] libmachine: (ha-844661) Creating domain...
	I1105 18:03:21.228177   27131 main.go:141] libmachine: (ha-844661) define libvirt domain using xml: 
	I1105 18:03:21.228198   27131 main.go:141] libmachine: (ha-844661) <domain type='kvm'>
	I1105 18:03:21.228204   27131 main.go:141] libmachine: (ha-844661)   <name>ha-844661</name>
	I1105 18:03:21.228209   27131 main.go:141] libmachine: (ha-844661)   <memory unit='MiB'>2200</memory>
	I1105 18:03:21.228214   27131 main.go:141] libmachine: (ha-844661)   <vcpu>2</vcpu>
	I1105 18:03:21.228218   27131 main.go:141] libmachine: (ha-844661)   <features>
	I1105 18:03:21.228223   27131 main.go:141] libmachine: (ha-844661)     <acpi/>
	I1105 18:03:21.228228   27131 main.go:141] libmachine: (ha-844661)     <apic/>
	I1105 18:03:21.228233   27131 main.go:141] libmachine: (ha-844661)     <pae/>
	I1105 18:03:21.228241   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228249   27131 main.go:141] libmachine: (ha-844661)   </features>
	I1105 18:03:21.228254   27131 main.go:141] libmachine: (ha-844661)   <cpu mode='host-passthrough'>
	I1105 18:03:21.228261   27131 main.go:141] libmachine: (ha-844661)   
	I1105 18:03:21.228268   27131 main.go:141] libmachine: (ha-844661)   </cpu>
	I1105 18:03:21.228298   27131 main.go:141] libmachine: (ha-844661)   <os>
	I1105 18:03:21.228318   27131 main.go:141] libmachine: (ha-844661)     <type>hvm</type>
	I1105 18:03:21.228325   27131 main.go:141] libmachine: (ha-844661)     <boot dev='cdrom'/>
	I1105 18:03:21.228329   27131 main.go:141] libmachine: (ha-844661)     <boot dev='hd'/>
	I1105 18:03:21.228355   27131 main.go:141] libmachine: (ha-844661)     <bootmenu enable='no'/>
	I1105 18:03:21.228375   27131 main.go:141] libmachine: (ha-844661)   </os>
	I1105 18:03:21.228385   27131 main.go:141] libmachine: (ha-844661)   <devices>
	I1105 18:03:21.228403   27131 main.go:141] libmachine: (ha-844661)     <disk type='file' device='cdrom'>
	I1105 18:03:21.228418   27131 main.go:141] libmachine: (ha-844661)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/boot2docker.iso'/>
	I1105 18:03:21.228429   27131 main.go:141] libmachine: (ha-844661)       <target dev='hdc' bus='scsi'/>
	I1105 18:03:21.228437   27131 main.go:141] libmachine: (ha-844661)       <readonly/>
	I1105 18:03:21.228450   27131 main.go:141] libmachine: (ha-844661)     </disk>
	I1105 18:03:21.228462   27131 main.go:141] libmachine: (ha-844661)     <disk type='file' device='disk'>
	I1105 18:03:21.228474   27131 main.go:141] libmachine: (ha-844661)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:03:21.228488   27131 main.go:141] libmachine: (ha-844661)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/ha-844661.rawdisk'/>
	I1105 18:03:21.228497   27131 main.go:141] libmachine: (ha-844661)       <target dev='hda' bus='virtio'/>
	I1105 18:03:21.228502   27131 main.go:141] libmachine: (ha-844661)     </disk>
	I1105 18:03:21.228511   27131 main.go:141] libmachine: (ha-844661)     <interface type='network'>
	I1105 18:03:21.228519   27131 main.go:141] libmachine: (ha-844661)       <source network='mk-ha-844661'/>
	I1105 18:03:21.228532   27131 main.go:141] libmachine: (ha-844661)       <model type='virtio'/>
	I1105 18:03:21.228539   27131 main.go:141] libmachine: (ha-844661)     </interface>
	I1105 18:03:21.228551   27131 main.go:141] libmachine: (ha-844661)     <interface type='network'>
	I1105 18:03:21.228560   27131 main.go:141] libmachine: (ha-844661)       <source network='default'/>
	I1105 18:03:21.228570   27131 main.go:141] libmachine: (ha-844661)       <model type='virtio'/>
	I1105 18:03:21.228579   27131 main.go:141] libmachine: (ha-844661)     </interface>
	I1105 18:03:21.228587   27131 main.go:141] libmachine: (ha-844661)     <serial type='pty'>
	I1105 18:03:21.228599   27131 main.go:141] libmachine: (ha-844661)       <target port='0'/>
	I1105 18:03:21.228607   27131 main.go:141] libmachine: (ha-844661)     </serial>
	I1105 18:03:21.228613   27131 main.go:141] libmachine: (ha-844661)     <console type='pty'>
	I1105 18:03:21.228629   27131 main.go:141] libmachine: (ha-844661)       <target type='serial' port='0'/>
	I1105 18:03:21.228642   27131 main.go:141] libmachine: (ha-844661)     </console>
	I1105 18:03:21.228653   27131 main.go:141] libmachine: (ha-844661)     <rng model='virtio'>
	I1105 18:03:21.228670   27131 main.go:141] libmachine: (ha-844661)       <backend model='random'>/dev/random</backend>
	I1105 18:03:21.228679   27131 main.go:141] libmachine: (ha-844661)     </rng>
	I1105 18:03:21.228687   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228694   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228699   27131 main.go:141] libmachine: (ha-844661)   </devices>
	I1105 18:03:21.228707   27131 main.go:141] libmachine: (ha-844661) </domain>
	I1105 18:03:21.228717   27131 main.go:141] libmachine: (ha-844661) 
	I1105 18:03:21.232718   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:b2:92:26 in network default
	I1105 18:03:21.233193   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:21.233215   27131 main.go:141] libmachine: (ha-844661) Ensuring networks are active...
	I1105 18:03:21.233765   27131 main.go:141] libmachine: (ha-844661) Ensuring network default is active
	I1105 18:03:21.234017   27131 main.go:141] libmachine: (ha-844661) Ensuring network mk-ha-844661 is active
	I1105 18:03:21.234455   27131 main.go:141] libmachine: (ha-844661) Getting domain xml...
	I1105 18:03:21.235089   27131 main.go:141] libmachine: (ha-844661) Creating domain...
	I1105 18:03:22.412574   27131 main.go:141] libmachine: (ha-844661) Waiting to get IP...
	I1105 18:03:22.413266   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:22.413608   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:22.413630   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:22.413577   27154 retry.go:31] will retry after 279.954438ms: waiting for machine to come up
	I1105 18:03:22.695059   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:22.695483   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:22.695511   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:22.695451   27154 retry.go:31] will retry after 304.898477ms: waiting for machine to come up
	I1105 18:03:23.001972   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.002322   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.002343   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.002303   27154 retry.go:31] will retry after 443.493793ms: waiting for machine to come up
	I1105 18:03:23.446683   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.447042   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.447069   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.446999   27154 retry.go:31] will retry after 509.391538ms: waiting for machine to come up
	I1105 18:03:23.957539   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.957900   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.957927   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.957847   27154 retry.go:31] will retry after 602.880889ms: waiting for machine to come up
	I1105 18:03:24.562659   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:24.563119   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:24.563144   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:24.563076   27154 retry.go:31] will retry after 741.734368ms: waiting for machine to come up
	I1105 18:03:25.306116   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:25.306633   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:25.306663   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:25.306587   27154 retry.go:31] will retry after 1.015957471s: waiting for machine to come up
	I1105 18:03:26.324342   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:26.324731   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:26.324755   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:26.324683   27154 retry.go:31] will retry after 1.378698886s: waiting for machine to come up
	I1105 18:03:27.705172   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:27.705551   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:27.705575   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:27.705506   27154 retry.go:31] will retry after 1.576136067s: waiting for machine to come up
	I1105 18:03:29.283960   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:29.284380   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:29.284417   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:29.284337   27154 retry.go:31] will retry after 2.253581174s: waiting for machine to come up
	I1105 18:03:31.539436   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:31.539830   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:31.539860   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:31.539773   27154 retry.go:31] will retry after 1.761371484s: waiting for machine to come up
	I1105 18:03:33.303719   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:33.304166   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:33.304190   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:33.304128   27154 retry.go:31] will retry after 2.85080226s: waiting for machine to come up
	I1105 18:03:36.156486   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:36.156898   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:36.156920   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:36.156851   27154 retry.go:31] will retry after 4.320693691s: waiting for machine to come up
	I1105 18:03:40.482276   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.482645   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has current primary IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.482666   27131 main.go:141] libmachine: (ha-844661) Found IP for machine: 192.168.39.48
	I1105 18:03:40.482731   27131 main.go:141] libmachine: (ha-844661) Reserving static IP address...
	I1105 18:03:40.483186   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find host DHCP lease matching {name: "ha-844661", mac: "52:54:00:ba:57:dd", ip: "192.168.39.48"} in network mk-ha-844661
	I1105 18:03:40.553039   27131 main.go:141] libmachine: (ha-844661) DBG | Getting to WaitForSSH function...
	I1105 18:03:40.553065   27131 main.go:141] libmachine: (ha-844661) Reserved static IP address: 192.168.39.48
	I1105 18:03:40.553074   27131 main.go:141] libmachine: (ha-844661) Waiting for SSH to be available...
	I1105 18:03:40.555541   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.555889   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.555921   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.556076   27131 main.go:141] libmachine: (ha-844661) DBG | Using SSH client type: external
	I1105 18:03:40.556099   27131 main.go:141] libmachine: (ha-844661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa (-rw-------)
	I1105 18:03:40.556130   27131 main.go:141] libmachine: (ha-844661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:03:40.556164   27131 main.go:141] libmachine: (ha-844661) DBG | About to run SSH command:
	I1105 18:03:40.556196   27131 main.go:141] libmachine: (ha-844661) DBG | exit 0
	I1105 18:03:40.678881   27131 main.go:141] libmachine: (ha-844661) DBG | SSH cmd err, output: <nil>: 
	I1105 18:03:40.679168   27131 main.go:141] libmachine: (ha-844661) KVM machine creation complete!
	I1105 18:03:40.679431   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:40.680021   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:40.680197   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:40.680362   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:03:40.680377   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:03:40.681549   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:03:40.681565   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:03:40.681581   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:03:40.681589   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.683878   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.684197   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.684222   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.684354   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.684522   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.684666   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.684789   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.684936   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.685164   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.685176   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:03:40.782106   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:03:40.782126   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:03:40.782134   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.785142   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.785540   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.785569   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.785664   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.785868   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.786031   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.786159   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.786354   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.786515   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.786526   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:03:40.883619   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:03:40.883676   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:03:40.883682   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:03:40.883690   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:40.883923   27131 buildroot.go:166] provisioning hostname "ha-844661"
	I1105 18:03:40.883949   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:40.884120   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.886507   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.886833   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.886857   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.886980   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.887151   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.887291   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.887396   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.887549   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.887741   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.887756   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661 && echo "ha-844661" | sudo tee /etc/hostname
	I1105 18:03:41.000392   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661
	
	I1105 18:03:41.000420   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.003294   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.003567   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.003608   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.003744   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.003933   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.004103   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.004242   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.004353   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.004531   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.004545   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:03:41.111348   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:03:41.111383   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:03:41.111432   27131 buildroot.go:174] setting up certificates
	I1105 18:03:41.111449   27131 provision.go:84] configureAuth start
	I1105 18:03:41.111460   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:41.111736   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.114450   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.114812   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.114841   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.114944   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.117124   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.117436   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.117462   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.117573   27131 provision.go:143] copyHostCerts
	I1105 18:03:41.117613   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:03:41.117655   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:03:41.117671   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:03:41.117771   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:03:41.117875   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:03:41.117903   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:03:41.117913   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:03:41.117953   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:03:41.118004   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:03:41.118021   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:03:41.118027   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:03:41.118050   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:03:41.118095   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661 san=[127.0.0.1 192.168.39.48 ha-844661 localhost minikube]
	I1105 18:03:41.208702   27131 provision.go:177] copyRemoteCerts
	I1105 18:03:41.208760   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:03:41.208783   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.211467   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.211827   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.211850   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.212052   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.212204   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.212341   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.212443   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.296812   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:03:41.296897   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:03:41.319712   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:03:41.319772   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:03:41.342415   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:03:41.342483   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1105 18:03:41.365050   27131 provision.go:87] duration metric: took 253.585291ms to configureAuth
	I1105 18:03:41.365082   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:03:41.365296   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:03:41.365378   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.368515   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.368840   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.368869   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.369025   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.369189   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.369363   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.369489   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.369646   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.369808   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.369821   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:03:41.576635   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:03:41.576666   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:03:41.576676   27131 main.go:141] libmachine: (ha-844661) Calling .GetURL
	I1105 18:03:41.577929   27131 main.go:141] libmachine: (ha-844661) DBG | Using libvirt version 6000000
	I1105 18:03:41.580297   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.580615   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.580654   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.580760   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:03:41.580772   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:03:41.580778   27131 client.go:171] duration metric: took 20.840565211s to LocalClient.Create
	I1105 18:03:41.580795   27131 start.go:167] duration metric: took 20.84062429s to libmachine.API.Create "ha-844661"
	I1105 18:03:41.580805   27131 start.go:293] postStartSetup for "ha-844661" (driver="kvm2")
	I1105 18:03:41.580814   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:03:41.580829   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.581046   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:03:41.581068   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.583124   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.583501   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.583522   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.583601   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.583779   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.583943   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.584110   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.661161   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:03:41.665033   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:03:41.665062   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:03:41.665127   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:03:41.665231   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:03:41.665252   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:03:41.665373   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:03:41.674466   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:03:41.696494   27131 start.go:296] duration metric: took 115.67878ms for postStartSetup
	I1105 18:03:41.696542   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:41.697138   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.699655   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.699984   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.700009   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.700292   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:03:41.700505   27131 start.go:128] duration metric: took 20.977803727s to createHost
	I1105 18:03:41.700531   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.702386   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.702601   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.702627   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.702711   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.702863   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.703005   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.703106   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.703251   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.703451   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.703464   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:03:41.803411   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829821.777547713
	
	I1105 18:03:41.803432   27131 fix.go:216] guest clock: 1730829821.777547713
	I1105 18:03:41.803441   27131 fix.go:229] Guest: 2024-11-05 18:03:41.777547713 +0000 UTC Remote: 2024-11-05 18:03:41.700519186 +0000 UTC m=+21.085212205 (delta=77.028527ms)
	I1105 18:03:41.803466   27131 fix.go:200] guest clock delta is within tolerance: 77.028527ms
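The fix.go lines above read the guest VM clock over SSH (`date +%s.%N`), compare it against the host-side timestamp, and skip any resync because the 77ms delta is inside tolerance. A minimal Go sketch of that comparison, using the two timestamps from the log; the 1-second tolerance below is an assumed value for illustration, the actual threshold minikube applies is not shown in this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDelta reports how far the guest clock is ahead of (positive) or
    // behind (negative) the reference host-side time.
    func clockDelta(guest, host time.Time) time.Duration {
    	return guest.Sub(host)
    }

    func main() {
    	// Values taken from the fix.go:229 log line above.
    	host := time.Date(2024, 11, 5, 18, 3, 41, 700519186, time.UTC)
    	guest := time.Unix(1730829821, 777547713).UTC()

    	const tolerance = time.Second // assumed threshold, illustration only

    	d := clockDelta(guest, host)
    	if d < 0 {
    		d = -d
    	}
    	if d <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", d) // 77.028527ms here
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
    	}
    }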
	I1105 18:03:41.803472   27131 start.go:83] releasing machines lock for "ha-844661", held for 21.080851922s
	I1105 18:03:41.803504   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.803818   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.806212   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.806544   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.806574   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.806731   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807182   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807323   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807421   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:03:41.807458   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.807478   27131 ssh_runner.go:195] Run: cat /version.json
	I1105 18:03:41.807503   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.809937   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810070   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810265   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.810291   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810383   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.810476   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.810506   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810517   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.810650   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.810655   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.810815   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.810809   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.810922   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.811058   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.883551   27131 ssh_runner.go:195] Run: systemctl --version
	I1105 18:03:41.923044   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:03:42.072766   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:03:42.079007   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:03:42.079076   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:03:42.094820   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:03:42.094844   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:03:42.094917   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:03:42.118583   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:03:42.138115   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:03:42.138172   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:03:42.152440   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:03:42.166344   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:03:42.279937   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:03:42.434792   27131 docker.go:233] disabling docker service ...
	I1105 18:03:42.434953   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:03:42.449109   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:03:42.461551   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:03:42.578145   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:03:42.699091   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:03:42.712758   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:03:42.730751   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:03:42.730837   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.741264   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:03:42.741334   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.751371   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.761461   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.771733   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:03:42.782235   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.792151   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.809625   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
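The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches cgroup_manager to "cgroupfs", forces conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A rough Go sketch of the first two edits applied to the config text; this approximates the sed calls and is not minikube's actual code path:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf mirrors two of the sed edits from the log: pin the pause
    // image and set the cgroup manager in a crio.conf.d drop-in.
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	return conf
    }

    func main() {
    	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
    }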
	I1105 18:03:42.820631   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:03:42.829567   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:03:42.829657   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:03:42.841074   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:03:42.849804   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:03:42.970294   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:03:43.072129   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:03:43.072202   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:03:43.076505   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:03:43.076553   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:03:43.079876   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:03:43.118292   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:03:43.118368   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:03:43.145365   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:03:43.174475   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:03:43.175688   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:43.178118   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:43.178392   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:43.178429   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:43.178616   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:03:43.182299   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
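The /etc/hosts edit above is idempotent: the bash one-liner drops any line already ending in host.minikube.internal, appends the fresh mapping, and copies the result back over /etc/hosts (the same pattern is used again later for control-plane.minikube.internal). A minimal Go sketch of the same filter-and-append step on the file contents, for illustration only; minikube runs the bash one-liner shown in the log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry removes any line ending in "\t<hostname>" and appends
    // "ip\thostname", mirroring the bash one-liner in the log.
    func upsertHostsEntry(hosts, ip, hostname string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
    	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
    }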
	I1105 18:03:43.194156   27131 kubeadm.go:883] updating cluster {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:03:43.194286   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:03:43.194326   27131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:03:43.224139   27131 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 18:03:43.224200   27131 ssh_runner.go:195] Run: which lz4
	I1105 18:03:43.227717   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1105 18:03:43.227803   27131 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:03:43.231367   27131 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:03:43.231394   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 18:03:44.421241   27131 crio.go:462] duration metric: took 1.193460189s to copy over tarball
	I1105 18:03:44.421309   27131 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:03:46.448289   27131 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.026951778s)
	I1105 18:03:46.448321   27131 crio.go:469] duration metric: took 2.027054899s to extract the tarball
	I1105 18:03:46.448331   27131 ssh_runner.go:146] rm: /preloaded.tar.lz4
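Because the fresh VM had no preloaded images, the ~392 MB preload tarball was copied over SSH (about 1.2s) and unpacked into /var with lz4 decompression and security xattrs preserved (about 2s), after which the crictl listing below reports all images as preloaded. A hedged Go sketch of invoking the same tar command shown in the log, assuming tar and lz4 are on PATH as they are in the minikube guest image:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // extractPreload unpacks a cri-o image preload tarball the same way the
    // log above does: lz4-decompress and untar into dest, keeping security xattrs.
    func extractPreload(tarball, dest string) error {
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", dest, "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("tar failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		log.Fatal(err)
    	}
    }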
	I1105 18:03:46.484203   27131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:03:46.526703   27131 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:03:46.526728   27131 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:03:46.526737   27131 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.2 crio true true} ...
	I1105 18:03:46.526839   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:03:46.526923   27131 ssh_runner.go:195] Run: crio config
	I1105 18:03:46.568508   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:46.568526   27131 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 18:03:46.568535   27131 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:03:46.568555   27131 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844661 NodeName:ha-844661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:03:46.568670   27131 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
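The generated kubeadm config above stitches four documents into one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), pointing every control plane at control-plane.minikube.internal:8443 and fixing the pod CIDR at 10.244.0.0/16; it is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below. A simplified Go text/template sketch rendering just the InitConfiguration section from the node values logged at kubeadm.go:189; this is an illustration, not minikube's own template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed-down stand-in for the per-node values kubeadm.go:189 logs above.
    type initCfg struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CRISocket        string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	cfg := initCfg{
    		AdvertiseAddress: "192.168.39.48",
    		BindPort:         8443,
    		NodeName:         "ha-844661",
    		CRISocket:        "unix:///var/run/crio/crio.sock",
    	}
    	t := template.Must(template.New("init").Parse(initTmpl))
    	if err := t.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }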
	
	I1105 18:03:46.568726   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:03:46.568770   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:03:46.584044   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:03:46.584179   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
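kube-vip runs as a static pod on each control-plane node; the manifest above tells it to ARP-advertise the HA virtual IP 192.168.39.254 on eth0, take part in leader election through the plndr-cp-lock lease, and, because load balancing was auto-enabled at kube-vip.go:167, also spread API traffic on port 8443 across control planes. A small Go sketch that assembles the key environment variables from the values shown in the manifest; kubeVIPEnv is a hypothetical helper, not minikube's or kube-vip's API:

    package main

    import "fmt"

    // kubeVIPEnv assembles the environment variables the static-pod manifest
    // above passes to kube-vip for ARP-based VIP and control-plane load balancing.
    func kubeVIPEnv(vip, iface string, port int, lbEnable bool) map[string]string {
    	env := map[string]string{
    		"vip_arp":       "true",
    		"address":       vip,
    		"vip_interface": iface,
    		"port":          fmt.Sprint(port),
    		"cp_enable":     "true",
    	}
    	if lbEnable {
    		env["lb_enable"] = "true"
    		env["lb_port"] = fmt.Sprint(port)
    	}
    	return env
    }

    func main() {
    	for k, v := range kubeVIPEnv("192.168.39.254", "eth0", 8443, true) {
    		fmt.Printf("%s=%s\n", k, v)
    	}
    }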
	I1105 18:03:46.584237   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:03:46.593564   27131 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:03:46.593616   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 18:03:46.602413   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1105 18:03:46.618161   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:03:46.634586   27131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1105 18:03:46.650181   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1105 18:03:46.665377   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:03:46.668925   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:03:46.679986   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:03:46.788039   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
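The unit files staged just above (the 10-kubeadm.conf drop-in and kubelet.service) carry the ExecStart line logged earlier at kubeadm.go:946, pinning --hostname-override and --node-ip to this node; after daemon-reload the kubelet is started before kubeadm init runs. A small Go sketch rebuilding that ExecStart line from the per-node values; the helper itself is hypothetical, the flags and paths follow the logged unit:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeletExecStart rebuilds the ExecStart line shown at kubeadm.go:946
    // from the node name, node IP, and Kubernetes version.
    func kubeletExecStart(version, nodeName, nodeIP string) string {
    	flags := []string{
    		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
    		"--config=/var/lib/kubelet/config.yaml",
    		"--hostname-override=" + nodeName,
    		"--kubeconfig=/etc/kubernetes/kubelet.conf",
    		"--node-ip=" + nodeIP,
    	}
    	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
    }

    func main() {
    	fmt.Println(kubeletExecStart("v1.31.2", "ha-844661", "192.168.39.48"))
    }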
	I1105 18:03:46.803466   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.48
	I1105 18:03:46.803487   27131 certs.go:194] generating shared ca certs ...
	I1105 18:03:46.803503   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.803661   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:03:46.803717   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:03:46.803731   27131 certs.go:256] generating profile certs ...
	I1105 18:03:46.803788   27131 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:03:46.803806   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt with IP's: []
	I1105 18:03:46.868048   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt ...
	I1105 18:03:46.868073   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt: {Name:mk1b1384fd11cca80823d77e811ce40ed13a39a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.868260   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key ...
	I1105 18:03:46.868273   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key: {Name:mk63b8cd2995063e8f249e25659d0d581c1c609d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.868372   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a
	I1105 18:03:46.868394   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.254]
	I1105 18:03:47.168393   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a ...
	I1105 18:03:47.168422   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a: {Name:mkfb181b3090bd8c3e2b4c01d3e8bebb9949241a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.168598   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a ...
	I1105 18:03:47.168612   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a: {Name:mk8ee51e070e9f8f3516c15edb86d588cc060b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.168716   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:03:47.168827   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:03:47.168910   27131 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:03:47.168929   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt with IP's: []
	I1105 18:03:47.272330   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt ...
	I1105 18:03:47.272363   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt: {Name:mkef37902a8eaa82f4513587418829011c41aa9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.272551   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key ...
	I1105 18:03:47.272567   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key: {Name:mka47632f74c8924a4575ad6d317d9db035f5aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
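The profile certificates above are all signed by the shared minikubeCA key; the apiserver serving cert in particular carries the node IP, the HA VIP, and the in-cluster service IPs as SANs (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.254, per crypto.go:68). A condensed Go sketch of issuing such a CA-signed serving certificate with IP SANs using the standard library; this is illustrative only, minikube's own helpers live in crypto.go, and the real code also persists the private key alongside the cert:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServingCert signs a server certificate with the given CA, embedding
    // the IP SANs the log lists for the apiserver cert.
    func issueServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses:  ips,
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() {
    	// A throwaway CA so the sketch is self-contained; the real one is the
    	// persistent minikubeCA key pair reused in the log.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	ca, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}
    	ips := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.48"), net.ParseIP("192.168.39.254"),
    	}
    	der, err := issueServingCert(ca, caKey, ips)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued apiserver serving cert: %d DER bytes, %d IP SANs\n", len(der), len(ips))
    }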
	I1105 18:03:47.272701   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:03:47.272727   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:03:47.272746   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:03:47.272764   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:03:47.272788   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:03:47.272803   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:03:47.272820   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:03:47.272860   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:03:47.272935   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:03:47.272983   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:03:47.272995   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:03:47.273029   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:03:47.273061   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:03:47.273095   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:03:47.273147   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:03:47.273189   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.273209   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.273227   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.273815   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:03:47.298487   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:03:47.321311   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:03:47.343337   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:03:47.365041   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 18:03:47.387466   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:03:47.409231   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:03:47.430651   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:03:47.452212   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:03:47.474137   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:03:47.495806   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:03:47.517223   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:03:47.532167   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:03:47.537576   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:03:47.549952   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.556864   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.556922   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.564072   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:03:47.575807   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:03:47.588714   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.593382   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.593445   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.601274   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:03:47.613497   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:03:47.623268   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.627461   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.627512   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.632828   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
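The ln/openssl sequence above is how OpenSSL-based clients locate trust anchors: each CA certificate copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (for example b5213941.0 for minikubeCA.pem in the command above), and that hash is exactly what `openssl x509 -hash -noout -in <cert>` prints. A small Go sketch that shells out to the same openssl invocation to compute the link name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHash returns the OpenSSL subject hash used to name the
    // /etc/ssl/certs/<hash>.0 symlink for a CA certificate.
    func subjectHash(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", fmt.Errorf("openssl x509 -hash: %w", err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", hash)
    }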
	I1105 18:03:47.642821   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:03:47.646365   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:03:47.646411   27131 kubeadm.go:392] StartCluster: {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:03:47.646477   27131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:03:47.646544   27131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:03:47.682117   27131 cri.go:89] found id: ""
	I1105 18:03:47.682186   27131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:03:47.691260   27131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:03:47.700258   27131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:03:47.708885   27131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:03:47.708907   27131 kubeadm.go:157] found existing configuration files:
	
	I1105 18:03:47.708950   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:03:47.717439   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:03:47.717497   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:03:47.726246   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:03:47.734558   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:03:47.734611   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:03:47.743183   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:03:47.751387   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:03:47.751433   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:03:47.760203   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:03:47.768178   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:03:47.768234   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:03:47.776770   27131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:03:47.967353   27131 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 18:03:59.183523   27131 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 18:03:59.183604   27131 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:03:59.183699   27131 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:03:59.183848   27131 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:03:59.183952   27131 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 18:03:59.184008   27131 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:03:59.185602   27131 out.go:235]   - Generating certificates and keys ...
	I1105 18:03:59.185696   27131 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:03:59.185773   27131 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:03:59.185856   27131 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 18:03:59.185912   27131 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 18:03:59.185997   27131 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 18:03:59.186086   27131 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 18:03:59.186173   27131 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 18:03:59.186341   27131 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-844661 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1105 18:03:59.186418   27131 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 18:03:59.186574   27131 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-844661 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1105 18:03:59.186680   27131 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 18:03:59.186753   27131 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 18:03:59.186826   27131 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 18:03:59.186915   27131 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:03:59.187003   27131 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:03:59.187068   27131 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 18:03:59.187122   27131 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:03:59.187247   27131 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:03:59.187350   27131 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:03:59.187464   27131 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:03:59.187595   27131 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:03:59.189162   27131 out.go:235]   - Booting up control plane ...
	I1105 18:03:59.189263   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:03:59.189330   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:03:59.189411   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:03:59.189560   27131 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:03:59.189674   27131 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:03:59.189732   27131 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:03:59.189870   27131 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 18:03:59.190000   27131 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 18:03:59.190063   27131 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.0020676s
	I1105 18:03:59.190152   27131 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 18:03:59.190232   27131 kubeadm.go:310] [api-check] The API server is healthy after 5.797330373s
	I1105 18:03:59.190371   27131 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 18:03:59.190545   27131 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 18:03:59.190621   27131 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 18:03:59.190819   27131 kubeadm.go:310] [mark-control-plane] Marking the node ha-844661 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 18:03:59.190908   27131 kubeadm.go:310] [bootstrap-token] Using token: 87pfeh.t954ki35wy37ojkf
	I1105 18:03:59.192164   27131 out.go:235]   - Configuring RBAC rules ...
	I1105 18:03:59.192251   27131 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 18:03:59.192336   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 18:03:59.192519   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 18:03:59.192749   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 18:03:59.192914   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 18:03:59.193036   27131 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 18:03:59.193159   27131 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 18:03:59.193205   27131 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 18:03:59.193263   27131 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 18:03:59.193287   27131 kubeadm.go:310] 
	I1105 18:03:59.193351   27131 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 18:03:59.193361   27131 kubeadm.go:310] 
	I1105 18:03:59.193483   27131 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 18:03:59.193498   27131 kubeadm.go:310] 
	I1105 18:03:59.193525   27131 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 18:03:59.193576   27131 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 18:03:59.193636   27131 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 18:03:59.193642   27131 kubeadm.go:310] 
	I1105 18:03:59.193690   27131 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 18:03:59.193695   27131 kubeadm.go:310] 
	I1105 18:03:59.193734   27131 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 18:03:59.193739   27131 kubeadm.go:310] 
	I1105 18:03:59.193790   27131 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 18:03:59.193854   27131 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 18:03:59.193915   27131 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 18:03:59.193921   27131 kubeadm.go:310] 
	I1105 18:03:59.193994   27131 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 18:03:59.194085   27131 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 18:03:59.194112   27131 kubeadm.go:310] 
	I1105 18:03:59.194272   27131 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 87pfeh.t954ki35wy37ojkf \
	I1105 18:03:59.194366   27131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 18:03:59.194391   27131 kubeadm.go:310] 	--control-plane 
	I1105 18:03:59.194397   27131 kubeadm.go:310] 
	I1105 18:03:59.194470   27131 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 18:03:59.194483   27131 kubeadm.go:310] 
	I1105 18:03:59.194599   27131 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 87pfeh.t954ki35wy37ojkf \
	I1105 18:03:59.194713   27131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
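The --discovery-token-ca-cert-hash printed in the join commands above pins the cluster CA for joining nodes: it is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, formatted as "sha256:<hex>". A small Go sketch that recomputes it from a CA certificate PEM using the standard library (ordinary crypto/x509 usage, not minikube or kubeadm code):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash reproduces kubeadm's discovery-token-ca-cert-hash: sha256 over
    // the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
    func caCertHash(caPEM []byte) (string, error) {
    	block, _ := pem.Decode(caPEM)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
    	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	hash, err := caCertHash(caPEM)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(hash)
    }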
	I1105 18:03:59.194723   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:59.194729   27131 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 18:03:59.196416   27131 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 18:03:59.198072   27131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 18:03:59.203679   27131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 18:03:59.203699   27131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 18:03:59.220864   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1105 18:03:59.577751   27131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 18:03:59.577851   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:03:59.577925   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661 minikube.k8s.io/updated_at=2024_11_05T18_03_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=true
	I1105 18:03:59.773949   27131 ops.go:34] apiserver oom_adj: -16
	I1105 18:03:59.774061   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:00.274452   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:00.774925   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:01.274873   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:01.774746   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:02.274653   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:02.410257   27131 kubeadm.go:1113] duration metric: took 2.832479659s to wait for elevateKubeSystemPrivileges
	I1105 18:04:02.410297   27131 kubeadm.go:394] duration metric: took 14.763886485s to StartCluster
	I1105 18:04:02.410318   27131 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:02.410399   27131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:02.411281   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:02.411532   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 18:04:02.411550   27131 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:02.411572   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:04:02.411587   27131 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 18:04:02.411670   27131 addons.go:69] Setting storage-provisioner=true in profile "ha-844661"
	I1105 18:04:02.411690   27131 addons.go:234] Setting addon storage-provisioner=true in "ha-844661"
	I1105 18:04:02.411709   27131 addons.go:69] Setting default-storageclass=true in profile "ha-844661"
	I1105 18:04:02.411717   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:02.411726   27131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-844661"
	I1105 18:04:02.411747   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:02.412164   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.412164   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.412207   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.412212   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.427238   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I1105 18:04:02.427311   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I1105 18:04:02.427732   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.427772   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.428176   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.428198   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.428276   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.428292   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.428565   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.428588   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.428730   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.429124   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.429169   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.430653   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:02.430886   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 18:04:02.431352   27131 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 18:04:02.431554   27131 addons.go:234] Setting addon default-storageclass=true in "ha-844661"
	I1105 18:04:02.431592   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:02.431879   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.431911   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.444788   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1105 18:04:02.445225   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.445776   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.445800   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.446109   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.446308   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.446715   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1105 18:04:02.447172   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.447626   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.447652   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.447978   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.447989   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:02.448526   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.448566   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.450053   27131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:04:02.451430   27131 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:04:02.451447   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 18:04:02.451465   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:02.453936   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.454325   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:02.454352   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.454596   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:02.454747   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:02.454895   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:02.455039   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:02.463344   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1105 18:04:02.463824   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.464272   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.464295   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.464580   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.464736   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.466150   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:02.466325   27131 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 18:04:02.466346   27131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 18:04:02.466366   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:02.468861   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.469292   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:02.469320   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.469478   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:02.469641   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:02.469795   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:02.469919   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:02.559386   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
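The bash pipeline above patches the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host-side gateway IP (192.168.39.1 here) ahead of the forward directive, and adds a log directive before errors. Once the cluster is reachable, the rewritten Corefile can be inspected with kubectl (assuming the kubectl context carries the profile name, as minikube sets it up):

	kubectl --context ha-844661 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'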
	I1105 18:04:02.582601   27131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:04:02.634107   27131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 18:04:03.029603   27131 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1105 18:04:03.212900   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.212938   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.212957   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213012   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213238   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213254   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213263   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.213301   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213309   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213317   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213327   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.213335   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213567   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.213576   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.213601   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213608   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213606   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213626   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213684   27131 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 18:04:03.213697   27131 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 18:04:03.213833   27131 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1105 18:04:03.213847   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:03.213858   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:03.213863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:03.230734   27131 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1105 18:04:03.231584   27131 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1105 18:04:03.231606   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:03.231617   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:03.231624   27131 round_trippers.go:473]     Content-Type: application/json
	I1105 18:04:03.231628   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:03.238223   27131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:04:03.238372   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.238386   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.238717   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.238773   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.238806   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.241254   27131 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1105 18:04:03.242442   27131 addons.go:510] duration metric: took 830.859112ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1105 18:04:03.242476   27131 start.go:246] waiting for cluster config update ...
	I1105 18:04:03.242491   27131 start.go:255] writing updated cluster config ...
	I1105 18:04:03.244187   27131 out.go:201] 
	I1105 18:04:03.246027   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:03.246146   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:03.247790   27131 out.go:177] * Starting "ha-844661-m02" control-plane node in "ha-844661" cluster
	I1105 18:04:03.248926   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:04:03.248959   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:04:03.249079   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:04:03.249097   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:04:03.249198   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:03.249437   27131 start.go:360] acquireMachinesLock for ha-844661-m02: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:04:03.249497   27131 start.go:364] duration metric: took 35.772µs to acquireMachinesLock for "ha-844661-m02"
	I1105 18:04:03.249518   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:03.249605   27131 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1105 18:04:03.251175   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:04:03.251287   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:03.251335   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:03.267010   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I1105 18:04:03.267624   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:03.268242   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:03.268268   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:03.268591   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:03.268765   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:03.268983   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:03.269146   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:04:03.269172   27131 client.go:168] LocalClient.Create starting
	I1105 18:04:03.269203   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:04:03.269237   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:04:03.269249   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:04:03.269297   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:04:03.269315   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:04:03.269325   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:04:03.269338   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:04:03.269353   27131 main.go:141] libmachine: (ha-844661-m02) Calling .PreCreateCheck
	I1105 18:04:03.269514   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:03.269893   27131 main.go:141] libmachine: Creating machine...
	I1105 18:04:03.269906   27131 main.go:141] libmachine: (ha-844661-m02) Calling .Create
	I1105 18:04:03.270065   27131 main.go:141] libmachine: (ha-844661-m02) Creating KVM machine...
	I1105 18:04:03.271308   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found existing default KVM network
	I1105 18:04:03.271402   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found existing private KVM network mk-ha-844661
	I1105 18:04:03.271535   27131 main.go:141] libmachine: (ha-844661-m02) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 ...
	I1105 18:04:03.271561   27131 main.go:141] libmachine: (ha-844661-m02) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:04:03.271623   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.271523   27490 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:04:03.271709   27131 main.go:141] libmachine: (ha-844661-m02) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:04:03.505902   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.505765   27490 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa...
	I1105 18:04:03.597676   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.597557   27490 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/ha-844661-m02.rawdisk...
	I1105 18:04:03.597706   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Writing magic tar header
	I1105 18:04:03.597716   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Writing SSH key tar header
	I1105 18:04:03.597724   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.597692   27490 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 ...
	I1105 18:04:03.597812   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02
	I1105 18:04:03.597845   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:04:03.597903   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:04:03.597916   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 (perms=drwx------)
	I1105 18:04:03.597939   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:04:03.597948   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:04:03.597957   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:04:03.597965   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:04:03.597973   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:04:03.597977   27131 main.go:141] libmachine: (ha-844661-m02) Creating domain...
	I1105 18:04:03.598013   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:04:03.598038   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:04:03.598049   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:04:03.598061   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home
	I1105 18:04:03.598072   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Skipping /home - not owner
	I1105 18:04:03.598898   27131 main.go:141] libmachine: (ha-844661-m02) define libvirt domain using xml: 
	I1105 18:04:03.598916   27131 main.go:141] libmachine: (ha-844661-m02) <domain type='kvm'>
	I1105 18:04:03.598925   27131 main.go:141] libmachine: (ha-844661-m02)   <name>ha-844661-m02</name>
	I1105 18:04:03.598932   27131 main.go:141] libmachine: (ha-844661-m02)   <memory unit='MiB'>2200</memory>
	I1105 18:04:03.598941   27131 main.go:141] libmachine: (ha-844661-m02)   <vcpu>2</vcpu>
	I1105 18:04:03.598947   27131 main.go:141] libmachine: (ha-844661-m02)   <features>
	I1105 18:04:03.598959   27131 main.go:141] libmachine: (ha-844661-m02)     <acpi/>
	I1105 18:04:03.598965   27131 main.go:141] libmachine: (ha-844661-m02)     <apic/>
	I1105 18:04:03.598984   27131 main.go:141] libmachine: (ha-844661-m02)     <pae/>
	I1105 18:04:03.598993   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599024   27131 main.go:141] libmachine: (ha-844661-m02)   </features>
	I1105 18:04:03.599044   27131 main.go:141] libmachine: (ha-844661-m02)   <cpu mode='host-passthrough'>
	I1105 18:04:03.599055   27131 main.go:141] libmachine: (ha-844661-m02)   
	I1105 18:04:03.599061   27131 main.go:141] libmachine: (ha-844661-m02)   </cpu>
	I1105 18:04:03.599069   27131 main.go:141] libmachine: (ha-844661-m02)   <os>
	I1105 18:04:03.599077   27131 main.go:141] libmachine: (ha-844661-m02)     <type>hvm</type>
	I1105 18:04:03.599086   27131 main.go:141] libmachine: (ha-844661-m02)     <boot dev='cdrom'/>
	I1105 18:04:03.599093   27131 main.go:141] libmachine: (ha-844661-m02)     <boot dev='hd'/>
	I1105 18:04:03.599109   27131 main.go:141] libmachine: (ha-844661-m02)     <bootmenu enable='no'/>
	I1105 18:04:03.599120   27131 main.go:141] libmachine: (ha-844661-m02)   </os>
	I1105 18:04:03.599128   27131 main.go:141] libmachine: (ha-844661-m02)   <devices>
	I1105 18:04:03.599142   27131 main.go:141] libmachine: (ha-844661-m02)     <disk type='file' device='cdrom'>
	I1105 18:04:03.599158   27131 main.go:141] libmachine: (ha-844661-m02)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/boot2docker.iso'/>
	I1105 18:04:03.599168   27131 main.go:141] libmachine: (ha-844661-m02)       <target dev='hdc' bus='scsi'/>
	I1105 18:04:03.599177   27131 main.go:141] libmachine: (ha-844661-m02)       <readonly/>
	I1105 18:04:03.599191   27131 main.go:141] libmachine: (ha-844661-m02)     </disk>
	I1105 18:04:03.599203   27131 main.go:141] libmachine: (ha-844661-m02)     <disk type='file' device='disk'>
	I1105 18:04:03.599219   27131 main.go:141] libmachine: (ha-844661-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:04:03.599234   27131 main.go:141] libmachine: (ha-844661-m02)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/ha-844661-m02.rawdisk'/>
	I1105 18:04:03.599245   27131 main.go:141] libmachine: (ha-844661-m02)       <target dev='hda' bus='virtio'/>
	I1105 18:04:03.599254   27131 main.go:141] libmachine: (ha-844661-m02)     </disk>
	I1105 18:04:03.599264   27131 main.go:141] libmachine: (ha-844661-m02)     <interface type='network'>
	I1105 18:04:03.599277   27131 main.go:141] libmachine: (ha-844661-m02)       <source network='mk-ha-844661'/>
	I1105 18:04:03.599295   27131 main.go:141] libmachine: (ha-844661-m02)       <model type='virtio'/>
	I1105 18:04:03.599306   27131 main.go:141] libmachine: (ha-844661-m02)     </interface>
	I1105 18:04:03.599316   27131 main.go:141] libmachine: (ha-844661-m02)     <interface type='network'>
	I1105 18:04:03.599328   27131 main.go:141] libmachine: (ha-844661-m02)       <source network='default'/>
	I1105 18:04:03.599336   27131 main.go:141] libmachine: (ha-844661-m02)       <model type='virtio'/>
	I1105 18:04:03.599346   27131 main.go:141] libmachine: (ha-844661-m02)     </interface>
	I1105 18:04:03.599360   27131 main.go:141] libmachine: (ha-844661-m02)     <serial type='pty'>
	I1105 18:04:03.599371   27131 main.go:141] libmachine: (ha-844661-m02)       <target port='0'/>
	I1105 18:04:03.599379   27131 main.go:141] libmachine: (ha-844661-m02)     </serial>
	I1105 18:04:03.599388   27131 main.go:141] libmachine: (ha-844661-m02)     <console type='pty'>
	I1105 18:04:03.599395   27131 main.go:141] libmachine: (ha-844661-m02)       <target type='serial' port='0'/>
	I1105 18:04:03.599405   27131 main.go:141] libmachine: (ha-844661-m02)     </console>
	I1105 18:04:03.599414   27131 main.go:141] libmachine: (ha-844661-m02)     <rng model='virtio'>
	I1105 18:04:03.599426   27131 main.go:141] libmachine: (ha-844661-m02)       <backend model='random'>/dev/random</backend>
	I1105 18:04:03.599433   27131 main.go:141] libmachine: (ha-844661-m02)     </rng>
	I1105 18:04:03.599441   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599450   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599458   27131 main.go:141] libmachine: (ha-844661-m02)   </devices>
	I1105 18:04:03.599468   27131 main.go:141] libmachine: (ha-844661-m02) </domain>
	I1105 18:04:03.599478   27131 main.go:141] libmachine: (ha-844661-m02) 
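The kvm2 driver defines and boots this domain through the libvirt API rather than the CLI; purely as an illustration, the equivalent manual steps with virsh (assuming the XML above were saved to ha-844661-m02.xml) would be roughly:

	virsh define ha-844661-m02.xml    # register the domain with libvirtd
	virsh start ha-844661-m02         # boot it (the "Creating domain..." step below)
	virsh domifaddr ha-844661-m02     # query the DHCP lease, which the retry loop below waits for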
	I1105 18:04:03.606202   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:bc:44:b3 in network default
	I1105 18:04:03.606844   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring networks are active...
	I1105 18:04:03.606873   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:03.607579   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring network default is active
	I1105 18:04:03.607877   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring network mk-ha-844661 is active
	I1105 18:04:03.608339   27131 main.go:141] libmachine: (ha-844661-m02) Getting domain xml...
	I1105 18:04:03.609124   27131 main.go:141] libmachine: (ha-844661-m02) Creating domain...
	I1105 18:04:04.804854   27131 main.go:141] libmachine: (ha-844661-m02) Waiting to get IP...
	I1105 18:04:04.805676   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:04.806067   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:04.806128   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:04.806059   27490 retry.go:31] will retry after 221.645511ms: waiting for machine to come up
	I1105 18:04:05.029505   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.029976   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.030010   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.029926   27490 retry.go:31] will retry after 382.599739ms: waiting for machine to come up
	I1105 18:04:05.414471   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.414907   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.414933   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.414864   27490 retry.go:31] will retry after 327.048237ms: waiting for machine to come up
	I1105 18:04:05.743302   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.743771   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.743804   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.743710   27490 retry.go:31] will retry after 518.430277ms: waiting for machine to come up
	I1105 18:04:06.263310   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:06.263829   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:06.263853   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:06.263789   27490 retry.go:31] will retry after 629.481848ms: waiting for machine to come up
	I1105 18:04:06.894494   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:06.895089   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:06.895118   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:06.895038   27490 retry.go:31] will retry after 880.755684ms: waiting for machine to come up
	I1105 18:04:07.777105   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:07.777585   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:07.777629   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:07.777517   27490 retry.go:31] will retry after 728.781586ms: waiting for machine to come up
	I1105 18:04:08.507833   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:08.508322   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:08.508350   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:08.508268   27490 retry.go:31] will retry after 1.405343367s: waiting for machine to come up
	I1105 18:04:09.915737   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:09.916175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:09.916206   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:09.916130   27490 retry.go:31] will retry after 1.614277424s: waiting for machine to come up
	I1105 18:04:11.532132   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:11.532606   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:11.532651   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:11.532528   27490 retry.go:31] will retry after 2.182290087s: waiting for machine to come up
	I1105 18:04:13.716671   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:13.717064   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:13.717090   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:13.717036   27490 retry.go:31] will retry after 2.181711488s: waiting for machine to come up
	I1105 18:04:15.901246   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:15.901742   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:15.901769   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:15.901678   27490 retry.go:31] will retry after 3.553887492s: waiting for machine to come up
	I1105 18:04:19.457631   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:19.458252   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:19.458280   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:19.458200   27490 retry.go:31] will retry after 2.842714356s: waiting for machine to come up
	I1105 18:04:22.304175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:22.304555   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:22.304577   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:22.304516   27490 retry.go:31] will retry after 4.429177675s: waiting for machine to come up
	I1105 18:04:26.738445   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.738953   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has current primary IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.739021   27131 main.go:141] libmachine: (ha-844661-m02) Found IP for machine: 192.168.39.38
	I1105 18:04:26.739034   27131 main.go:141] libmachine: (ha-844661-m02) Reserving static IP address...
	I1105 18:04:26.739350   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find host DHCP lease matching {name: "ha-844661-m02", mac: "52:54:00:46:71:ad", ip: "192.168.39.38"} in network mk-ha-844661
	I1105 18:04:26.812299   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Getting to WaitForSSH function...
	I1105 18:04:26.812324   27131 main.go:141] libmachine: (ha-844661-m02) Reserved static IP address: 192.168.39.38
	I1105 18:04:26.812336   27131 main.go:141] libmachine: (ha-844661-m02) Waiting for SSH to be available...
	I1105 18:04:26.815175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.815513   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661
	I1105 18:04:26.815540   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find defined IP address of network mk-ha-844661 interface with MAC address 52:54:00:46:71:ad
	I1105 18:04:26.815668   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH client type: external
	I1105 18:04:26.815699   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa (-rw-------)
	I1105 18:04:26.815752   27131 main.go:141] libmachine: (ha-844661-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:04:26.815781   27131 main.go:141] libmachine: (ha-844661-m02) DBG | About to run SSH command:
	I1105 18:04:26.815798   27131 main.go:141] libmachine: (ha-844661-m02) DBG | exit 0
	I1105 18:04:26.819693   27131 main.go:141] libmachine: (ha-844661-m02) DBG | SSH cmd err, output: exit status 255: 
	I1105 18:04:26.819710   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1105 18:04:26.819733   27131 main.go:141] libmachine: (ha-844661-m02) DBG | command : exit 0
	I1105 18:04:26.819747   27131 main.go:141] libmachine: (ha-844661-m02) DBG | err     : exit status 255
	I1105 18:04:26.819758   27131 main.go:141] libmachine: (ha-844661-m02) DBG | output  : 
	I1105 18:04:29.821203   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Getting to WaitForSSH function...
	I1105 18:04:29.823337   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.823729   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:29.823762   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.823872   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH client type: external
	I1105 18:04:29.823894   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa (-rw-------)
	I1105 18:04:29.823922   27131 main.go:141] libmachine: (ha-844661-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:04:29.823940   27131 main.go:141] libmachine: (ha-844661-m02) DBG | About to run SSH command:
	I1105 18:04:29.823952   27131 main.go:141] libmachine: (ha-844661-m02) DBG | exit 0
	I1105 18:04:29.951085   27131 main.go:141] libmachine: (ha-844661-m02) DBG | SSH cmd err, output: <nil>: 
	I1105 18:04:29.951342   27131 main.go:141] libmachine: (ha-844661-m02) KVM machine creation complete!
	I1105 18:04:29.951700   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:29.952363   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:29.952587   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:29.952760   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:04:29.952794   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetState
	I1105 18:04:29.954134   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:04:29.954148   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:04:29.954153   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:04:29.954158   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:29.956382   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.956701   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:29.956727   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.956885   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:29.957041   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:29.957158   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:29.957245   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:29.957384   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:29.957587   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:29.957598   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:04:30.062109   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:04:30.062134   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:04:30.062144   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.064857   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.065391   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.065423   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.065611   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.065805   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.065970   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.066128   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.066292   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.066496   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.066512   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:04:30.175484   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:04:30.175559   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:04:30.175573   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:04:30.175583   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.175860   27131 buildroot.go:166] provisioning hostname "ha-844661-m02"
	I1105 18:04:30.175892   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.176101   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.178534   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.178884   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.178952   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.179036   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.179212   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.179364   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.179519   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.179693   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.179914   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.179935   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661-m02 && echo "ha-844661-m02" | sudo tee /etc/hostname
	I1105 18:04:30.302286   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661-m02
	
	I1105 18:04:30.302313   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.305041   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.305376   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.305397   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.305565   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.305735   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.305864   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.306027   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.306153   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.306345   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.306368   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:04:30.418880   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:04:30.418913   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:04:30.418933   27131 buildroot.go:174] setting up certificates
	I1105 18:04:30.418944   27131 provision.go:84] configureAuth start
	I1105 18:04:30.418958   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.419230   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:30.421818   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.422198   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.422218   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.422357   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.424553   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.424893   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.424934   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.425058   27131 provision.go:143] copyHostCerts
	I1105 18:04:30.425085   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:04:30.425123   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:04:30.425135   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:04:30.425209   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:04:30.425294   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:04:30.425312   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:04:30.425316   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:04:30.425339   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:04:30.425392   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:04:30.425411   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:04:30.425417   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:04:30.425437   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:04:30.425500   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661-m02 san=[127.0.0.1 192.168.39.38 ha-844661-m02 localhost minikube]
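The line above shows minikube signing a per-machine server certificate against the shared CA, with SANs covering the loopback address, the machine IP, its hostname, localhost and minikube. For orientation only, a minimal openssl sketch that produces a similarly scoped SAN certificate; the file names, subject and validity below are illustrative, not the values minikube uses.

    # Sketch: SAN server cert signed by an existing CA (ca.pem / ca-key.pem assumed to exist).
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.ha-844661-m02" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.38,DNS:ha-844661-m02,DNS:localhost,DNS:minikube') \
      -out server.pem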
	I1105 18:04:30.669687   27131 provision.go:177] copyRemoteCerts
	I1105 18:04:30.669745   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:04:30.669767   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.672398   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.672764   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.672792   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.672964   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.673166   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.673319   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.673440   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:30.757634   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:04:30.757707   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:04:30.779929   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:04:30.779991   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:04:30.802282   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:04:30.802340   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:04:30.824080   27131 provision.go:87] duration metric: took 405.122043ms to configureAuth
	I1105 18:04:30.824105   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:04:30.824267   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:30.824337   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.826767   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.827187   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.827210   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.827374   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.827574   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.827761   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.827911   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.828074   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.828241   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.828257   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:04:31.054134   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:04:31.054167   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:04:31.054177   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetURL
	I1105 18:04:31.055397   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using libvirt version 6000000
	I1105 18:04:31.057579   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.057909   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.057942   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.058035   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:04:31.058055   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:04:31.058063   27131 client.go:171] duration metric: took 27.788882282s to LocalClient.Create
	I1105 18:04:31.058089   27131 start.go:167] duration metric: took 27.788944247s to libmachine.API.Create "ha-844661"
	I1105 18:04:31.058102   27131 start.go:293] postStartSetup for "ha-844661-m02" (driver="kvm2")
	I1105 18:04:31.058116   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:04:31.058140   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.058392   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:04:31.058416   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.060812   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.061181   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.061207   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.061372   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.061520   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.061638   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.061750   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.141343   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:04:31.145282   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:04:31.145305   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:04:31.145386   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:04:31.145475   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:04:31.145487   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:04:31.145583   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:04:31.154867   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:04:31.177214   27131 start.go:296] duration metric: took 119.098287ms for postStartSetup
	I1105 18:04:31.177266   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:31.177795   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:31.180218   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.180581   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.180609   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.180893   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:31.181127   27131 start.go:128] duration metric: took 27.931509235s to createHost
	I1105 18:04:31.181151   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.183589   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.183931   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.183977   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.184093   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.184255   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.184473   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.184627   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.184776   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:31.184927   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:31.184936   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:04:31.291832   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829871.274251077
	
	I1105 18:04:31.291862   27131 fix.go:216] guest clock: 1730829871.274251077
	I1105 18:04:31.291873   27131 fix.go:229] Guest: 2024-11-05 18:04:31.274251077 +0000 UTC Remote: 2024-11-05 18:04:31.181141215 +0000 UTC m=+70.565834196 (delta=93.109862ms)
	I1105 18:04:31.291893   27131 fix.go:200] guest clock delta is within tolerance: 93.109862ms
	I1105 18:04:31.291902   27131 start.go:83] releasing machines lock for "ha-844661-m02", held for 28.042391542s
	I1105 18:04:31.291933   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.292188   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:31.294847   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.295152   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.295182   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.297372   27131 out.go:177] * Found network options:
	I1105 18:04:31.298882   27131 out.go:177]   - NO_PROXY=192.168.39.48
	W1105 18:04:31.300182   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:04:31.300214   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.300744   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.300953   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.301049   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:04:31.301078   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	W1105 18:04:31.301139   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:04:31.301229   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:04:31.301249   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.303834   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304115   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304147   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.304164   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304340   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.304518   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.304656   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.304683   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304705   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.304817   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.304875   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.304966   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.305123   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.305293   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.537813   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:04:31.543318   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:04:31.543380   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:04:31.558192   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:04:31.558214   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:04:31.558265   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:04:31.574444   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:04:31.588020   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:04:31.588073   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:04:31.601225   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:04:31.614872   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:04:31.742673   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:04:31.906474   27131 docker.go:233] disabling docker service ...
	I1105 18:04:31.906547   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:04:31.920407   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:04:31.932829   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:04:32.065646   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:04:32.198693   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:04:32.211636   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:04:32.228537   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:04:32.228604   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.238359   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:04:32.238426   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.248245   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.258019   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.267772   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:04:32.277903   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.287745   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.304428   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
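Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf using the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager with conmon placed in the "pod" cgroup, and a default_sysctls entry that opens unprivileged ports from 0. Expected keys after the rewrite (values shown as comments, derived from the commands above):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",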
	I1105 18:04:32.315166   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:04:32.324687   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:04:32.324739   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:04:32.338701   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
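The sysctl probe fails because br_netfilter is not loaded yet, so minikube loads the module and enables IPv4 forwarding directly. On a general host the same two kubeadm prerequisites are usually made persistent as below (a sketch with file names as in the upstream Kubernetes docs; minikube applies the settings at provision time instead):

    # Load the bridge netfilter module now and on every boot.
    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    # Sysctls required for kubeadm-based clusters.
    sudo tee /etc/sysctl.d/99-kubernetes.conf <<'EOF'
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system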
	I1105 18:04:32.349299   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:32.473469   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:04:32.562263   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:04:32.562341   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:04:32.567966   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:04:32.568012   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:04:32.571415   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:04:32.608501   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:04:32.608591   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:04:32.636314   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:04:32.664649   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:04:32.666073   27131 out.go:177]   - env NO_PROXY=192.168.39.48
	I1105 18:04:32.667578   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:32.670054   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:32.670404   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:32.670434   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:32.670640   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:04:32.675107   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:04:32.687100   27131 mustload.go:65] Loading cluster: ha-844661
	I1105 18:04:32.687313   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:32.687563   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:32.687614   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:32.702173   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I1105 18:04:32.702544   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:32.703040   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:32.703059   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:32.703356   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:32.703527   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:32.705121   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:32.705395   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:32.705427   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:32.719590   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I1105 18:04:32.719963   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:32.720450   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:32.720471   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:32.720753   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:32.720928   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:32.721076   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.38
	I1105 18:04:32.721087   27131 certs.go:194] generating shared ca certs ...
	I1105 18:04:32.721099   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.721216   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:04:32.721253   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:04:32.721262   27131 certs.go:256] generating profile certs ...
	I1105 18:04:32.721325   27131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:04:32.721348   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8
	I1105 18:04:32.721359   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.254]
	I1105 18:04:32.817294   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 ...
	I1105 18:04:32.817319   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8: {Name:mk45feacdbeaf35fb15921aeeafdbedf19f7f2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.817474   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8 ...
	I1105 18:04:32.817487   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8: {Name:mkf0dcf762cb289770c94346689eba9d112e92a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.817551   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:04:32.817676   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
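Note the SAN list on the apiserver certificate generated above: besides both control-plane node IPs it contains 10.96.0.1 (the in-cluster kubernetes Service address) and 192.168.39.254, the kube-vip virtual IP, so TLS verification succeeds no matter which endpoint a client dials. To inspect the SANs on the node afterwards (illustrative):

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'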
	I1105 18:04:32.817799   27131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:04:32.817813   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:04:32.817827   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:04:32.817838   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:04:32.817853   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:04:32.817867   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:04:32.817879   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:04:32.817890   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:04:32.817899   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:04:32.817954   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:04:32.817983   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:04:32.817992   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:04:32.818014   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:04:32.818034   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:04:32.818055   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:04:32.818093   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:04:32.818118   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:04:32.818132   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:04:32.818145   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:32.818175   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:32.821627   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:32.822087   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:32.822115   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:32.822324   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:32.822514   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:32.822635   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:32.822754   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:32.895384   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:04:32.901151   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:04:32.911563   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:04:32.916135   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1105 18:04:32.926023   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:04:32.929795   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:04:32.939479   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:04:32.943460   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:04:32.953743   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:04:32.957464   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:04:32.967126   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:04:32.971370   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 18:04:32.981265   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:04:33.005948   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:04:33.028537   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:04:33.051691   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:04:33.077296   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 18:04:33.099924   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:04:33.122118   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:04:33.144496   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:04:33.167061   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:04:33.189719   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:04:33.212311   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:04:33.234431   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:04:33.249569   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1105 18:04:33.264947   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:04:33.280382   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:04:33.295047   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:04:33.310658   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 18:04:33.325227   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
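At this point every piece of the shared PKI (cluster CA, proxy-client CA, front-proxy CA, etcd CA, service-account keypair, plus the profile certs) has been staged under /var/lib/minikube/certs on the new machine. Distributing the PKI over SSH like this is what lets the node join as a control plane without kubeadm's --certificate-key upload mechanism. A rough way to eyeball the result on the guest:

    ls /var/lib/minikube/certs /var/lib/minikube/certs/etcd
    # expect apiserver.*, ca.*, proxy-client-ca.*, front-proxy-ca.*, sa.pub, sa.key, and etcd/ca.*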
	I1105 18:04:33.340438   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:04:33.345637   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:04:33.355163   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.359277   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.359332   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.364640   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:04:33.374197   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:04:33.383883   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.388205   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.388269   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.393534   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:04:33.403611   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:04:33.413496   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.417522   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.417572   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.422911   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:04:33.432783   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:04:33.436475   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:04:33.436531   27131 kubeadm.go:934] updating node {m02 192.168.39.38 8443 v1.31.2 crio true true} ...
	I1105 18:04:33.436634   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
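In the kubelet drop-in above (installed further down as 10-kubeadm.conf), the empty ExecStart= line is deliberate: it clears the ExecStart inherited from kubelet.service before the next line redefines it with per-node flags, pinning this kubelet to ha-844661-m02 / 192.168.39.38. To review the merged unit on the guest (a sketch):

    systemctl cat kubelet      # base unit plus the 10-kubeadm.conf drop-in
    ps -o args= -C kubelet     # flags the running kubelet actually received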
	I1105 18:04:33.436658   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:04:33.436695   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:04:33.453065   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:04:33.453148   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
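This manifest runs kube-vip as a static pod on each control-plane node: the instances elect a leader through the plndr-cp-lock Lease in kube-system, the leader answers ARP for the virtual IP 192.168.39.254 on eth0, and lb_enable spreads API traffic on port 8443 across the control planes. Two quick ways to see where the VIP currently lives (a sketch, run against the cluster):

    # Which control plane currently holds the kube-vip leader lease?
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
    # Is the VIP bound on this node right now?
    ip -4 addr show dev eth0 | grep 192.168.39.254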
	I1105 18:04:33.453221   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:04:33.462691   27131 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 18:04:33.462762   27131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 18:04:33.472553   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 18:04:33.472563   27131 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1105 18:04:33.472583   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:04:33.472584   27131 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1105 18:04:33.472655   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:04:33.477105   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 18:04:33.477133   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 18:04:34.400283   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:04:34.400361   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:04:34.405010   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 18:04:34.405045   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 18:04:34.538786   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:04:34.578282   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:04:34.578382   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:04:34.588498   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 18:04:34.588540   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
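kubectl, kubeadm and kubelet are downloaded from dl.k8s.io with checksums pinned to the matching .sha256 files, cached under .minikube/cache, and copied into /var/lib/minikube/binaries/v1.31.2 on the new node. Done by hand, the download-and-verify step is roughly (one binary shown):

    ver=v1.31.2
    curl -LO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubectl"
    curl -LO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubectl.sha256"
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check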
	I1105 18:04:34.951438   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:04:34.960448   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1105 18:04:34.976680   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:04:34.992424   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:04:35.007877   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:04:35.011593   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:04:35.023033   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:35.153794   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:04:35.171325   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:35.171790   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:35.171844   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:35.187008   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I1105 18:04:35.187511   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:35.188000   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:35.188021   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:35.188401   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:35.188593   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:35.188755   27131 start.go:317] joinCluster: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:04:35.188861   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 18:04:35.188876   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:35.192373   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:35.193007   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:35.193036   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:35.193153   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:35.193322   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:35.193493   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:35.193633   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:35.352325   27131 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:35.352369   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token io85g1.ce9beps1a5sdfopc --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m02 --control-plane --apiserver-advertise-address=192.168.39.38 --apiserver-bind-port=8443"
	I1105 18:04:56.900009   27131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token io85g1.ce9beps1a5sdfopc --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m02 --control-plane --apiserver-advertise-address=192.168.39.38 --apiserver-bind-port=8443": (21.547609543s)
	I1105 18:04:56.900049   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 18:04:57.434153   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661-m02 minikube.k8s.io/updated_at=2024_11_05T18_04_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=false
	I1105 18:04:57.562849   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844661-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 18:04:57.694503   27131 start.go:319] duration metric: took 22.505743601s to joinCluster
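The token and discovery hash in the join command come from the kubeadm token create --print-join-command invocation on the primary a few lines earlier; minikube adds --control-plane and the advertise address itself, and it can omit --certificate-key because it already copied the shared certificates over SSH. After the join it labels the node and removes the control-plane NoSchedule taint so the node also accepts regular workloads. A rough post-join sanity check:

    kubectl get nodes -o wide
    kubectl -n kube-system get pods -l component=etcd -o wide   # expect one etcd member per control plane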
	I1105 18:04:57.694592   27131 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:57.694912   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:57.695940   27131 out.go:177] * Verifying Kubernetes components...
	I1105 18:04:57.697102   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:57.983429   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:04:58.029548   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:58.029888   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:04:58.029994   27131 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.48:8443
	I1105 18:04:58.030271   27131 node_ready.go:35] waiting up to 6m0s for node "ha-844661-m02" to be "Ready" ...
	I1105 18:04:58.030407   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:58.030418   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:58.030429   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:58.030436   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:58.043836   27131 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 18:04:58.531097   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:58.531124   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:58.531135   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:58.531142   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:58.543712   27131 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1105 18:04:59.030878   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:59.030899   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:59.030908   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:59.030912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:59.035656   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:04:59.530596   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:59.530621   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:59.530633   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:59.530639   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:59.534120   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:00.030984   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:00.031006   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:00.031014   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:00.031017   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:00.034282   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:00.035034   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:00.530821   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:00.530846   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:00.530858   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:00.530864   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:00.536618   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:05:01.031310   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:01.031331   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:01.031340   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:01.031345   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:01.034641   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:01.530557   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:01.530578   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:01.530588   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:01.530595   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:01.539049   27131 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1105 18:05:02.031172   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:02.031197   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:02.031206   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:02.031210   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:02.034664   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:02.035295   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:02.531134   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:02.531158   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:02.531168   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:02.531173   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:02.534691   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:03.030649   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:03.030676   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:03.030684   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:03.030689   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:03.034294   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:03.531341   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:03.531362   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:03.531370   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:03.531374   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:03.534345   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:04.031389   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:04.031412   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:04.031420   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:04.031425   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:04.034432   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:04.531089   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:04.531121   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:04.531130   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:04.531134   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:04.534592   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:04.535270   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:05.030583   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:05.030606   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:05.030614   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:05.030618   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:05.034321   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:05.530714   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:05.530735   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:05.530744   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:05.530748   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:05.534305   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:06.031071   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:06.031093   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:06.031101   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:06.031105   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:06.034416   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:06.531473   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:06.531497   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:06.531506   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:06.531513   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:06.534473   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:07.030494   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:07.030518   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:07.030526   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:07.030530   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:07.033934   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:07.034429   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:07.530834   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:07.530861   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:07.530871   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:07.530876   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:07.534136   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:08.031065   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:08.031086   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:08.031094   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:08.031097   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:08.034490   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:08.530752   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:08.530774   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:08.530782   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:08.530787   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:08.534189   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:09.030956   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:09.030998   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:09.031007   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:09.031013   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:09.034514   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:09.035140   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:09.531531   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:09.531558   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:09.531569   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:09.531577   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:09.534682   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:10.030566   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:10.030603   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:10.030611   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:10.030615   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:10.034288   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:10.530760   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:10.530786   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:10.530797   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:10.530803   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:10.535094   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:11.031135   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:11.031156   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:11.031164   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:11.031167   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:11.034996   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:11.035590   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:11.530958   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:11.531025   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:11.531033   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:11.531036   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:11.534280   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:12.031192   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:12.031217   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:12.031226   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:12.031229   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:12.034799   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:12.530835   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:12.530859   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:12.530866   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:12.530871   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:12.535212   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:13.031138   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:13.031161   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:13.031168   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:13.031174   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:13.035138   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:13.035640   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:13.531336   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:13.531361   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:13.531372   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:13.531377   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:13.534343   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:14.031248   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:14.031269   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:14.031277   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:14.031280   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:14.034318   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:14.531121   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:14.531144   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:14.531152   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:14.531156   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:14.534522   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.031444   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:15.031471   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:15.031481   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:15.031485   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:15.035107   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.531231   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:15.531259   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:15.531295   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:15.531301   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:15.534694   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.535240   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:16.031143   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:16.031166   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:16.031174   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:16.031178   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:16.034542   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:16.530558   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:16.530585   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:16.530592   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:16.530596   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:16.534438   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.031334   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.031354   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.031363   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.031377   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.034859   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.530585   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.530609   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.530617   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.530621   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.534242   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.534822   27131 node_ready.go:49] node "ha-844661-m02" has status "Ready":"True"
	I1105 18:05:17.534842   27131 node_ready.go:38] duration metric: took 19.504524126s for node "ha-844661-m02" to be "Ready" ...
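	The loop above polls GET /api/v1/nodes/ha-844661-m02 roughly every 500ms until the node reports Ready. A minimal manual equivalent, assuming the same kubeconfig; the 6m timeout mirrors the wait budget in the log:
	    # block until the node's Ready condition becomes True (or the timeout expires)
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait node/ha-844661-m02 --for=condition=Ready --timeout=6m
	    # or read the condition directly
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node ha-844661-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'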
	I1105 18:05:17.534853   27131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:05:17.534933   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:17.534945   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.534955   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.534962   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.539957   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:17.545365   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.545456   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4bdfz
	I1105 18:05:17.545468   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.545479   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.545485   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.548667   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.549324   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.549340   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.549350   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.549355   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.552460   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.553059   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.553079   27131 pod_ready.go:82] duration metric: took 7.687809ms for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.553089   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.553143   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5g97
	I1105 18:05:17.553151   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.553157   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.553161   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.556133   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.556688   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.556701   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.556708   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.556711   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.559655   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.560102   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.560125   27131 pod_ready.go:82] duration metric: took 7.028626ms for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.560138   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.560192   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661
	I1105 18:05:17.560200   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.560207   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.560211   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.563041   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.563593   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.563605   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.563612   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.563617   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.566382   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.566799   27131 pod_ready.go:93] pod "etcd-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.566816   27131 pod_ready.go:82] duration metric: took 6.672004ms for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.566824   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.566881   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m02
	I1105 18:05:17.566890   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.566897   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.566901   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.570076   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.570614   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.570630   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.570639   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.570644   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.574134   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.574566   27131 pod_ready.go:93] pod "etcd-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.574584   27131 pod_ready.go:82] duration metric: took 7.753168ms for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.574604   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.730613   27131 request.go:632] Waited for 155.951288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:05:17.730716   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:05:17.730738   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.730750   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.730756   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.734460   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.931599   27131 request.go:632] Waited for 196.455308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.931691   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.931703   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.931714   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.931720   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.935472   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.936248   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.936270   27131 pod_ready.go:82] duration metric: took 361.658171ms for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.936283   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.131401   27131 request.go:632] Waited for 195.044956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:05:18.131499   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:05:18.131506   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.131514   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.131520   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.135482   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.331525   27131 request.go:632] Waited for 195.194468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:18.331593   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:18.331598   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.331605   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.331610   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.334692   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.335419   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:18.335438   27131 pod_ready.go:82] duration metric: took 399.143957ms for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.335449   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.530629   27131 request.go:632] Waited for 195.065538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:05:18.530715   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:05:18.530724   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.530734   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.530747   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.534793   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:18.731049   27131 request.go:632] Waited for 195.44458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:18.731128   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:18.731134   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.731143   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.731148   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.734646   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.735269   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:18.735297   27131 pod_ready.go:82] duration metric: took 399.840715ms for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.735311   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.931233   27131 request.go:632] Waited for 195.850053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:05:18.931303   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:05:18.931310   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.931320   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.931326   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.935301   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.131408   27131 request.go:632] Waited for 195.30965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.131471   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.131476   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.131483   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.131487   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.134983   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.135599   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.135639   27131 pod_ready.go:82] duration metric: took 400.298272ms for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.135650   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.330670   27131 request.go:632] Waited for 194.9293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:05:19.330729   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:05:19.330734   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.330741   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.330745   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.334278   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.531215   27131 request.go:632] Waited for 196.368669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:19.531275   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:19.531280   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.531287   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.531290   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.535032   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.535778   27131 pod_ready.go:93] pod "kube-proxy-pjpkh" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.535799   27131 pod_ready.go:82] duration metric: took 400.142488ms for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.535811   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.730859   27131 request.go:632] Waited for 194.981031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:05:19.730957   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:05:19.730981   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.730993   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.731003   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.734505   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.931630   27131 request.go:632] Waited for 196.356772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.931695   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.931703   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.931713   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.931721   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.934664   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:19.935138   27131 pod_ready.go:93] pod "kube-proxy-zsbfs" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.935158   27131 pod_ready.go:82] duration metric: took 399.338721ms for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.935171   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.131253   27131 request.go:632] Waited for 196.012842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:05:20.131339   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:05:20.131346   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.131354   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.131365   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.135136   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.331213   27131 request.go:632] Waited for 195.465792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:20.331270   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:20.331276   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.331283   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.331287   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.334310   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.334872   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:20.334894   27131 pod_ready.go:82] duration metric: took 399.711008ms for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.334908   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.531014   27131 request.go:632] Waited for 195.998146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:05:20.531072   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:05:20.531077   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.531084   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.531092   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.534503   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.731389   27131 request.go:632] Waited for 196.312857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:20.731476   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:20.731488   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.731496   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.731502   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.734866   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.735369   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:20.735387   27131 pod_ready.go:82] duration metric: took 400.467875ms for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.735398   27131 pod_ready.go:39] duration metric: took 3.200533347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
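	The per-pod checks above gate on the labels listed at the start of the wait (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). A rough manual equivalent with kubectl wait, assuming the same kubeconfig:
	    # wait for the DNS pods and one control-plane component; repeat for the other labels as needed
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m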
	I1105 18:05:20.735415   27131 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:05:20.735464   27131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:05:20.751422   27131 api_server.go:72] duration metric: took 23.056783291s to wait for apiserver process to appear ...
	I1105 18:05:20.751455   27131 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:05:20.751507   27131 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1105 18:05:20.755872   27131 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
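	The healthz probe can be reproduced with curl; a sketch, assuming the client certificate and CA paths from the rest.Config dump earlier in this log:
	    # a healthy apiserver answers with a bare "ok"
	    curl --cacert /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt \
	         --cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt \
	         --key /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key \
	         https://192.168.39.48:8443/healthz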
	I1105 18:05:20.755957   27131 round_trippers.go:463] GET https://192.168.39.48:8443/version
	I1105 18:05:20.755969   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.755980   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.755990   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.756842   27131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 18:05:20.756943   27131 api_server.go:141] control plane version: v1.31.2
	I1105 18:05:20.756968   27131 api_server.go:131] duration metric: took 5.494459ms to wait for apiserver health ...
	I1105 18:05:20.756978   27131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:05:20.930580   27131 request.go:632] Waited for 173.520285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:20.930658   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:20.930664   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.930672   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.930676   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.935815   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:05:20.939904   27131 system_pods.go:59] 17 kube-system pods found
	I1105 18:05:20.939939   27131 system_pods.go:61] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:05:20.939945   27131 system_pods.go:61] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:05:20.939949   27131 system_pods.go:61] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:05:20.939952   27131 system_pods.go:61] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:05:20.939955   27131 system_pods.go:61] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:05:20.939959   27131 system_pods.go:61] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:05:20.939962   27131 system_pods.go:61] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:05:20.939965   27131 system_pods.go:61] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:05:20.939968   27131 system_pods.go:61] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:05:20.939977   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:05:20.939981   27131 system_pods.go:61] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:05:20.939984   27131 system_pods.go:61] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:05:20.939989   27131 system_pods.go:61] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:05:20.939992   27131 system_pods.go:61] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:05:20.939997   27131 system_pods.go:61] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:05:20.940003   27131 system_pods.go:61] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:05:20.940006   27131 system_pods.go:61] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:05:20.940012   27131 system_pods.go:74] duration metric: took 183.024873ms to wait for pod list to return data ...
	I1105 18:05:20.940022   27131 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:05:21.131476   27131 request.go:632] Waited for 191.3776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:05:21.131535   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:05:21.131540   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.131548   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.131552   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.135052   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:21.135309   27131 default_sa.go:45] found service account: "default"
	I1105 18:05:21.135328   27131 default_sa.go:55] duration metric: took 195.299598ms for default service account to be created ...
	I1105 18:05:21.135339   27131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:05:21.330735   27131 request.go:632] Waited for 195.314096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:21.330794   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:21.330799   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.330807   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.330810   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.335501   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:21.339693   27131 system_pods.go:86] 17 kube-system pods found
	I1105 18:05:21.339720   27131 system_pods.go:89] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:05:21.339726   27131 system_pods.go:89] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:05:21.339731   27131 system_pods.go:89] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:05:21.339734   27131 system_pods.go:89] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:05:21.339738   27131 system_pods.go:89] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:05:21.339741   27131 system_pods.go:89] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:05:21.339745   27131 system_pods.go:89] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:05:21.339748   27131 system_pods.go:89] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:05:21.339751   27131 system_pods.go:89] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:05:21.339755   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:05:21.339759   27131 system_pods.go:89] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:05:21.339762   27131 system_pods.go:89] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:05:21.339765   27131 system_pods.go:89] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:05:21.339769   27131 system_pods.go:89] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:05:21.339774   27131 system_pods.go:89] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:05:21.339779   27131 system_pods.go:89] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:05:21.339782   27131 system_pods.go:89] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:05:21.339788   27131 system_pods.go:126] duration metric: took 204.442408ms to wait for k8s-apps to be running ...
	I1105 18:05:21.339802   27131 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:05:21.339842   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:05:21.354615   27131 system_svc.go:56] duration metric: took 14.795984ms WaitForService to wait for kubelet
	I1105 18:05:21.354651   27131 kubeadm.go:582] duration metric: took 23.660015871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:05:21.354696   27131 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:05:21.531068   27131 request.go:632] Waited for 176.291328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I1105 18:05:21.531146   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes
	I1105 18:05:21.531151   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.531159   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.531164   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.534798   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:21.535495   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:05:21.535541   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:05:21.535562   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:05:21.535565   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:05:21.535570   27131 node_conditions.go:105] duration metric: took 180.868401ms to run NodePressure ...
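	The NodePressure step only reads each node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage per node here). The same figures can be pulled straight from the API; a sketch, assuming the same kubeconfig:
	    # print the Capacity block (cpu, ephemeral-storage, memory, pods) for every node
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig describe nodes | grep -E -A 6 '^Capacity:'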
	I1105 18:05:21.535585   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:05:21.535607   27131 start.go:255] writing updated cluster config ...
	I1105 18:05:21.537763   27131 out.go:201] 
	I1105 18:05:21.539166   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:21.539250   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:21.540660   27131 out.go:177] * Starting "ha-844661-m03" control-plane node in "ha-844661" cluster
	I1105 18:05:21.541637   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:05:21.541660   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:05:21.541776   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:05:21.541788   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:05:21.541886   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:21.542068   27131 start.go:360] acquireMachinesLock for ha-844661-m03: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:05:21.542109   27131 start.go:364] duration metric: took 21.826µs to acquireMachinesLock for "ha-844661-m03"
	I1105 18:05:21.542124   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:05:21.542209   27131 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1105 18:05:21.543860   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:05:21.543943   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:21.543975   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:21.559283   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1105 18:05:21.559671   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:21.560085   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:21.560107   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:21.560440   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:21.560618   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:21.560762   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:21.560967   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:05:21.560994   27131 client.go:168] LocalClient.Create starting
	I1105 18:05:21.561031   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:05:21.561079   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:05:21.561096   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:05:21.561164   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:05:21.561192   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:05:21.561207   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:05:21.561232   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:05:21.561244   27131 main.go:141] libmachine: (ha-844661-m03) Calling .PreCreateCheck
	I1105 18:05:21.561482   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:21.561876   27131 main.go:141] libmachine: Creating machine...
	I1105 18:05:21.561887   27131 main.go:141] libmachine: (ha-844661-m03) Calling .Create
	I1105 18:05:21.562039   27131 main.go:141] libmachine: (ha-844661-m03) Creating KVM machine...
	I1105 18:05:21.563199   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found existing default KVM network
	I1105 18:05:21.563316   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found existing private KVM network mk-ha-844661
	I1105 18:05:21.563415   27131 main.go:141] libmachine: (ha-844661-m03) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 ...
	I1105 18:05:21.563439   27131 main.go:141] libmachine: (ha-844661-m03) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:05:21.563512   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.563393   27902 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:05:21.563587   27131 main.go:141] libmachine: (ha-844661-m03) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:05:21.796365   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.796229   27902 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa...
	I1105 18:05:21.882674   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.882568   27902 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/ha-844661-m03.rawdisk...
	I1105 18:05:21.882702   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Writing magic tar header
	I1105 18:05:21.882713   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Writing SSH key tar header
	I1105 18:05:21.882768   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.882708   27902 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 ...
	I1105 18:05:21.882834   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03
	I1105 18:05:21.882863   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 (perms=drwx------)
	I1105 18:05:21.882876   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:05:21.882896   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:05:21.882908   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:05:21.882922   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:05:21.882944   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:05:21.882956   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:05:21.883017   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home
	I1105 18:05:21.883034   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Skipping /home - not owner
	I1105 18:05:21.883044   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:05:21.883057   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:05:21.883070   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:05:21.883081   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:05:21.883089   27131 main.go:141] libmachine: (ha-844661-m03) Creating domain...
	I1105 18:05:21.883931   27131 main.go:141] libmachine: (ha-844661-m03) define libvirt domain using xml: 
	I1105 18:05:21.883952   27131 main.go:141] libmachine: (ha-844661-m03) <domain type='kvm'>
	I1105 18:05:21.883976   27131 main.go:141] libmachine: (ha-844661-m03)   <name>ha-844661-m03</name>
	I1105 18:05:21.883997   27131 main.go:141] libmachine: (ha-844661-m03)   <memory unit='MiB'>2200</memory>
	I1105 18:05:21.884009   27131 main.go:141] libmachine: (ha-844661-m03)   <vcpu>2</vcpu>
	I1105 18:05:21.884020   27131 main.go:141] libmachine: (ha-844661-m03)   <features>
	I1105 18:05:21.884028   27131 main.go:141] libmachine: (ha-844661-m03)     <acpi/>
	I1105 18:05:21.884038   27131 main.go:141] libmachine: (ha-844661-m03)     <apic/>
	I1105 18:05:21.884046   27131 main.go:141] libmachine: (ha-844661-m03)     <pae/>
	I1105 18:05:21.884056   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884078   27131 main.go:141] libmachine: (ha-844661-m03)   </features>
	I1105 18:05:21.884099   27131 main.go:141] libmachine: (ha-844661-m03)   <cpu mode='host-passthrough'>
	I1105 18:05:21.884109   27131 main.go:141] libmachine: (ha-844661-m03)   
	I1105 18:05:21.884119   27131 main.go:141] libmachine: (ha-844661-m03)   </cpu>
	I1105 18:05:21.884129   27131 main.go:141] libmachine: (ha-844661-m03)   <os>
	I1105 18:05:21.884134   27131 main.go:141] libmachine: (ha-844661-m03)     <type>hvm</type>
	I1105 18:05:21.884144   27131 main.go:141] libmachine: (ha-844661-m03)     <boot dev='cdrom'/>
	I1105 18:05:21.884151   27131 main.go:141] libmachine: (ha-844661-m03)     <boot dev='hd'/>
	I1105 18:05:21.884159   27131 main.go:141] libmachine: (ha-844661-m03)     <bootmenu enable='no'/>
	I1105 18:05:21.884169   27131 main.go:141] libmachine: (ha-844661-m03)   </os>
	I1105 18:05:21.884183   27131 main.go:141] libmachine: (ha-844661-m03)   <devices>
	I1105 18:05:21.884200   27131 main.go:141] libmachine: (ha-844661-m03)     <disk type='file' device='cdrom'>
	I1105 18:05:21.884216   27131 main.go:141] libmachine: (ha-844661-m03)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/boot2docker.iso'/>
	I1105 18:05:21.884227   27131 main.go:141] libmachine: (ha-844661-m03)       <target dev='hdc' bus='scsi'/>
	I1105 18:05:21.884237   27131 main.go:141] libmachine: (ha-844661-m03)       <readonly/>
	I1105 18:05:21.884245   27131 main.go:141] libmachine: (ha-844661-m03)     </disk>
	I1105 18:05:21.884252   27131 main.go:141] libmachine: (ha-844661-m03)     <disk type='file' device='disk'>
	I1105 18:05:21.884260   27131 main.go:141] libmachine: (ha-844661-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:05:21.884267   27131 main.go:141] libmachine: (ha-844661-m03)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/ha-844661-m03.rawdisk'/>
	I1105 18:05:21.884274   27131 main.go:141] libmachine: (ha-844661-m03)       <target dev='hda' bus='virtio'/>
	I1105 18:05:21.884279   27131 main.go:141] libmachine: (ha-844661-m03)     </disk>
	I1105 18:05:21.884289   27131 main.go:141] libmachine: (ha-844661-m03)     <interface type='network'>
	I1105 18:05:21.884295   27131 main.go:141] libmachine: (ha-844661-m03)       <source network='mk-ha-844661'/>
	I1105 18:05:21.884305   27131 main.go:141] libmachine: (ha-844661-m03)       <model type='virtio'/>
	I1105 18:05:21.884313   27131 main.go:141] libmachine: (ha-844661-m03)     </interface>
	I1105 18:05:21.884318   27131 main.go:141] libmachine: (ha-844661-m03)     <interface type='network'>
	I1105 18:05:21.884326   27131 main.go:141] libmachine: (ha-844661-m03)       <source network='default'/>
	I1105 18:05:21.884330   27131 main.go:141] libmachine: (ha-844661-m03)       <model type='virtio'/>
	I1105 18:05:21.884337   27131 main.go:141] libmachine: (ha-844661-m03)     </interface>
	I1105 18:05:21.884341   27131 main.go:141] libmachine: (ha-844661-m03)     <serial type='pty'>
	I1105 18:05:21.884347   27131 main.go:141] libmachine: (ha-844661-m03)       <target port='0'/>
	I1105 18:05:21.884351   27131 main.go:141] libmachine: (ha-844661-m03)     </serial>
	I1105 18:05:21.884358   27131 main.go:141] libmachine: (ha-844661-m03)     <console type='pty'>
	I1105 18:05:21.884363   27131 main.go:141] libmachine: (ha-844661-m03)       <target type='serial' port='0'/>
	I1105 18:05:21.884377   27131 main.go:141] libmachine: (ha-844661-m03)     </console>
	I1105 18:05:21.884395   27131 main.go:141] libmachine: (ha-844661-m03)     <rng model='virtio'>
	I1105 18:05:21.884408   27131 main.go:141] libmachine: (ha-844661-m03)       <backend model='random'>/dev/random</backend>
	I1105 18:05:21.884417   27131 main.go:141] libmachine: (ha-844661-m03)     </rng>
	I1105 18:05:21.884432   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884441   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884448   27131 main.go:141] libmachine: (ha-844661-m03)   </devices>
	I1105 18:05:21.884457   27131 main.go:141] libmachine: (ha-844661-m03) </domain>
	I1105 18:05:21.884464   27131 main.go:141] libmachine: (ha-844661-m03) 
	I1105 18:05:21.890775   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:13:05:59 in network default
	I1105 18:05:21.891360   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring networks are active...
	I1105 18:05:21.891380   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:21.892107   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring network default is active
	I1105 18:05:21.892388   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring network mk-ha-844661 is active
	I1105 18:05:21.892764   27131 main.go:141] libmachine: (ha-844661-m03) Getting domain xml...
	I1105 18:05:21.893494   27131 main.go:141] libmachine: (ha-844661-m03) Creating domain...
	I1105 18:05:23.118308   27131 main.go:141] libmachine: (ha-844661-m03) Waiting to get IP...
	I1105 18:05:23.119070   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.119438   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.119465   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.119424   27902 retry.go:31] will retry after 298.334175ms: waiting for machine to come up
	I1105 18:05:23.419032   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.419605   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.419622   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.419554   27902 retry.go:31] will retry after 273.113851ms: waiting for machine to come up
	I1105 18:05:23.693944   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.694349   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.694376   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.694317   27902 retry.go:31] will retry after 416.726009ms: waiting for machine to come up
	I1105 18:05:24.112851   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:24.113218   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:24.113249   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:24.113181   27902 retry.go:31] will retry after 551.953216ms: waiting for machine to come up
	I1105 18:05:24.666824   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:24.667304   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:24.667333   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:24.667249   27902 retry.go:31] will retry after 466.975145ms: waiting for machine to come up
	I1105 18:05:25.135836   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:25.136271   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:25.136292   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:25.136228   27902 retry.go:31] will retry after 589.586585ms: waiting for machine to come up
	I1105 18:05:25.726897   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:25.727480   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:25.727508   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:25.727434   27902 retry.go:31] will retry after 968.18251ms: waiting for machine to come up
	I1105 18:05:26.697257   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:26.697626   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:26.697652   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:26.697586   27902 retry.go:31] will retry after 1.127611463s: waiting for machine to come up
	I1105 18:05:27.826904   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:27.827312   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:27.827340   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:27.827258   27902 retry.go:31] will retry after 1.342205842s: waiting for machine to come up
	I1105 18:05:29.171618   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:29.172079   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:29.172146   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:29.172073   27902 retry.go:31] will retry after 1.974625708s: waiting for machine to come up
	I1105 18:05:31.148071   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:31.148482   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:31.148499   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:31.148434   27902 retry.go:31] will retry after 2.71055754s: waiting for machine to come up
	I1105 18:05:33.861975   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:33.862458   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:33.862483   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:33.862417   27902 retry.go:31] will retry after 3.509037885s: waiting for machine to come up
	I1105 18:05:37.373198   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:37.373748   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:37.373778   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:37.373690   27902 retry.go:31] will retry after 4.502442692s: waiting for machine to come up
	I1105 18:05:41.878135   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.878636   27131 main.go:141] libmachine: (ha-844661-m03) Found IP for machine: 192.168.39.52
	I1105 18:05:41.878665   27131 main.go:141] libmachine: (ha-844661-m03) Reserving static IP address...
	I1105 18:05:41.878678   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has current primary IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.879129   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find host DHCP lease matching {name: "ha-844661-m03", mac: "52:54:00:62:70:0e", ip: "192.168.39.52"} in network mk-ha-844661
	I1105 18:05:41.955281   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Getting to WaitForSSH function...
	I1105 18:05:41.955317   27131 main.go:141] libmachine: (ha-844661-m03) Reserved static IP address: 192.168.39.52
	I1105 18:05:41.955331   27131 main.go:141] libmachine: (ha-844661-m03) Waiting for SSH to be available...
	I1105 18:05:41.957358   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.957752   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:41.957781   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.957992   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using SSH client type: external
	I1105 18:05:41.958020   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa (-rw-------)
	I1105 18:05:41.958098   27131 main.go:141] libmachine: (ha-844661-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:05:41.958121   27131 main.go:141] libmachine: (ha-844661-m03) DBG | About to run SSH command:
	I1105 18:05:41.958159   27131 main.go:141] libmachine: (ha-844661-m03) DBG | exit 0
	I1105 18:05:42.086743   27131 main.go:141] libmachine: (ha-844661-m03) DBG | SSH cmd err, output: <nil>: 
	I1105 18:05:42.087041   27131 main.go:141] libmachine: (ha-844661-m03) KVM machine creation complete!
	I1105 18:05:42.087332   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:42.087854   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:42.088045   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:42.088232   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:05:42.088247   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetState
	I1105 18:05:42.089254   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:05:42.089266   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:05:42.089278   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:05:42.089283   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.091449   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.091761   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.091789   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.091901   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.092048   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.092179   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.092313   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.092495   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.092748   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.092763   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:05:42.206064   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:05:42.206086   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:05:42.206094   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.208351   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.208732   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.208750   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.208928   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.209072   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.209271   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.209444   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.209598   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.209769   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.209780   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:05:42.323709   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:05:42.323865   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:05:42.323878   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:05:42.323888   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.324267   27131 buildroot.go:166] provisioning hostname "ha-844661-m03"
	I1105 18:05:42.324297   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.324481   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.327505   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.327833   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.327862   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.328041   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.328248   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.328422   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.328544   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.328776   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.329027   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.329041   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661-m03 && echo "ha-844661-m03" | sudo tee /etc/hostname
	I1105 18:05:42.457338   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661-m03
	
	I1105 18:05:42.457368   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.460928   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.461292   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.461321   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.461510   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.461681   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.461835   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.461969   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.462135   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.462324   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.462348   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:05:42.583532   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:05:42.583564   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:05:42.583578   27131 buildroot.go:174] setting up certificates
	I1105 18:05:42.583593   27131 provision.go:84] configureAuth start
	I1105 18:05:42.583602   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.583890   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:42.586719   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.587067   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.587099   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.587290   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.589736   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.590192   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.590227   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.590360   27131 provision.go:143] copyHostCerts
	I1105 18:05:42.590408   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:05:42.590449   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:05:42.590459   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:05:42.590538   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:05:42.590622   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:05:42.590645   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:05:42.590652   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:05:42.590675   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:05:42.590726   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:05:42.590742   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:05:42.590748   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:05:42.590768   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:05:42.590820   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661-m03 san=[127.0.0.1 192.168.39.52 ha-844661-m03 localhost minikube]
	I1105 18:05:42.925752   27131 provision.go:177] copyRemoteCerts
	I1105 18:05:42.925808   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:05:42.925833   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.928689   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.929066   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.929101   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.929303   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.929489   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.929666   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.929803   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.020278   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:05:43.020356   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:05:43.044012   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:05:43.044085   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:05:43.067535   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:05:43.067615   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:05:43.091055   27131 provision.go:87] duration metric: took 507.451446ms to configureAuth
	I1105 18:05:43.091084   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:05:43.091353   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:43.091482   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.094765   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.095169   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.095193   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.095384   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.095574   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.095740   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.095896   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.096067   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:43.096263   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:43.096284   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:05:43.325666   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:05:43.325693   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:05:43.325711   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetURL
	I1105 18:05:43.326946   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using libvirt version 6000000
	I1105 18:05:43.329691   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.330121   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.330146   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.330327   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:05:43.330347   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:05:43.330356   27131 client.go:171] duration metric: took 21.769352405s to LocalClient.Create
	I1105 18:05:43.330393   27131 start.go:167] duration metric: took 21.769425686s to libmachine.API.Create "ha-844661"
	I1105 18:05:43.330407   27131 start.go:293] postStartSetup for "ha-844661-m03" (driver="kvm2")
	I1105 18:05:43.330422   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:05:43.330439   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.330671   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:05:43.330693   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.332887   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.333189   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.333218   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.333427   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.333597   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.333764   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.333891   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.421747   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:05:43.425946   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:05:43.425980   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:05:43.426048   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:05:43.426118   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:05:43.426127   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:05:43.426241   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:05:43.436295   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:05:43.461822   27131 start.go:296] duration metric: took 131.400624ms for postStartSetup
	I1105 18:05:43.461911   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:43.462559   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:43.465039   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.465395   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.465419   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.465660   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:43.465861   27131 start.go:128] duration metric: took 21.923641121s to createHost
	I1105 18:05:43.465891   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.468236   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.468751   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.468776   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.468993   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.469148   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.469288   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.469410   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.469542   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:43.469719   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:43.469729   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:05:43.583301   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829943.559053309
	
	I1105 18:05:43.583330   27131 fix.go:216] guest clock: 1730829943.559053309
	I1105 18:05:43.583338   27131 fix.go:229] Guest: 2024-11-05 18:05:43.559053309 +0000 UTC Remote: 2024-11-05 18:05:43.465876826 +0000 UTC m=+142.850569806 (delta=93.176483ms)
	I1105 18:05:43.583357   27131 fix.go:200] guest clock delta is within tolerance: 93.176483ms
	I1105 18:05:43.583365   27131 start.go:83] releasing machines lock for "ha-844661-m03", held for 22.041249603s
	I1105 18:05:43.583392   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.583670   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:43.586387   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.586835   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.586865   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.589174   27131 out.go:177] * Found network options:
	I1105 18:05:43.590513   27131 out.go:177]   - NO_PROXY=192.168.39.48,192.168.39.38
	W1105 18:05:43.591696   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:05:43.591728   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:05:43.591742   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592264   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592439   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592540   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:05:43.592583   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	W1105 18:05:43.592659   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:05:43.592686   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:05:43.592773   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:05:43.592798   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.595358   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595711   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.595743   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595763   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595936   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.596109   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.596235   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.596238   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.596260   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.596402   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.596401   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.596521   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.596667   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.596795   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.836071   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:05:43.841664   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:05:43.841742   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:05:43.858022   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:05:43.858050   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:05:43.858129   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:05:43.874613   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:05:43.888461   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:05:43.888526   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:05:43.901586   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:05:43.914516   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:05:44.022716   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:05:44.162802   27131 docker.go:233] disabling docker service ...
	I1105 18:05:44.162867   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:05:44.178520   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:05:44.190518   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:05:44.307326   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:05:44.440411   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:05:44.453238   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:05:44.471519   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:05:44.471573   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.481424   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:05:44.481492   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.491154   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.500794   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.511947   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:05:44.521660   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.531075   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.547126   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
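The sed invocations above are how this run rewrites /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, and seed a default_sysctls list that opens unprivileged low ports. Below is a minimal Go sketch of the same replace-in-place edit; it assumes direct file access instead of the ssh_runner used in the log, and setTOMLKey is a hypothetical helper, not minikube's code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites any existing "key = ..." line in the drop-in to the
// desired value, mirroring what the sed commands above do over SSH.
// (Hypothetical helper for illustration only.)
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	if err := setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		panic(err)
	}
	if err := setTOMLKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
}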
	I1105 18:05:44.557037   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:05:44.565707   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:05:44.565772   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:05:44.580225   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:05:44.590720   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:05:44.720733   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:05:44.813635   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:05:44.813712   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:05:44.818398   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:05:44.818453   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:05:44.821924   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:05:44.862340   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:05:44.862414   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:05:44.888088   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:05:44.915450   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:05:44.916959   27131 out.go:177]   - env NO_PROXY=192.168.39.48
	I1105 18:05:44.918290   27131 out.go:177]   - env NO_PROXY=192.168.39.48,192.168.39.38
	I1105 18:05:44.919504   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:44.921870   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:44.922342   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:44.922369   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:44.922579   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:05:44.926550   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:05:44.938321   27131 mustload.go:65] Loading cluster: ha-844661
	I1105 18:05:44.938602   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:44.939019   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:44.939070   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:44.954536   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
	I1105 18:05:44.955060   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:44.955556   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:44.955581   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:44.955872   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:44.956050   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:05:44.957611   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:05:44.957920   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:44.957971   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:44.973679   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33387
	I1105 18:05:44.974166   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:44.974646   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:44.974660   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:44.974951   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:44.975198   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:05:44.975390   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.52
	I1105 18:05:44.975402   27131 certs.go:194] generating shared ca certs ...
	I1105 18:05:44.975424   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:44.975543   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:05:44.975579   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:05:44.975587   27131 certs.go:256] generating profile certs ...
	I1105 18:05:44.975659   27131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:05:44.975685   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b
	I1105 18:05:44.975700   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.52 192.168.39.254]
	I1105 18:05:45.201266   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b ...
	I1105 18:05:45.201297   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b: {Name:mk528e0260fc30831e80a622836a2ff38ea38838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:45.201463   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b ...
	I1105 18:05:45.201476   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b: {Name:mkf6f5a9f3c5c5cd5e56be42a7f99d1f16c92ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:45.201544   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:05:45.201685   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:05:45.201845   27131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
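The apiserver profile cert above is regenerated because the SAN list now has to cover the new node (192.168.39.52) as well as the kube-vip VIP (192.168.39.254). A compact sketch of issuing such a cert with crypto/x509 follows; the self-signed CA here merely stands in for minikubeCA and error handling is elided, so treat it as an illustration rather than minikube's certs.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA; the real flow reuses ca.key/ca.crt.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// apiserver serving cert with the IP SANs listed in the log, including the VIP.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.48"), net.ParseIP("192.168.39.38"),
			net.ParseIP("192.168.39.52"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}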
	I1105 18:05:45.201861   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:05:45.201877   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:05:45.201896   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:05:45.201914   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:05:45.201928   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:05:45.201942   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:05:45.201954   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:05:45.215059   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:05:45.215144   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:05:45.215186   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:05:45.215194   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:05:45.215215   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:05:45.215240   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:05:45.215272   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:05:45.215314   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:05:45.215350   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.215374   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.215398   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.215435   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:05:45.218425   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:45.218874   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:05:45.218901   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:45.219093   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:05:45.219284   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:05:45.219433   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:05:45.219544   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:05:45.291312   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:05:45.296113   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:05:45.309256   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:05:45.313268   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1105 18:05:45.324891   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:05:45.328601   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:05:45.339115   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:05:45.343326   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:05:45.353973   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:05:45.357652   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:05:45.367881   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:05:45.371920   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 18:05:45.381431   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:05:45.405521   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:05:45.428099   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:05:45.450896   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:05:45.472444   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1105 18:05:45.494567   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:05:45.518941   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:05:45.542679   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:05:45.565272   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:05:45.586847   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:05:45.609171   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:05:45.631071   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:05:45.647046   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1105 18:05:45.662643   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:05:45.677589   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:05:45.693263   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:05:45.708513   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 18:05:45.723904   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:05:45.739595   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:05:45.744988   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:05:45.754754   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.759038   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.759097   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.764843   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:05:45.774526   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:05:45.784026   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.788019   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.788066   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.793328   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:05:45.803282   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:05:45.813203   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.817364   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.817407   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.822692   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:05:45.832731   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:05:45.836652   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:05:45.836705   27131 kubeadm.go:934] updating node {m03 192.168.39.52 8443 v1.31.2 crio true true} ...
	I1105 18:05:45.836816   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:05:45.836851   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:05:45.836896   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:05:45.851973   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:05:45.852033   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 18:05:45.852095   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:05:45.861565   27131 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 18:05:45.861624   27131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 18:05:45.871179   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1105 18:05:45.871192   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 18:05:45.871218   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:05:45.871230   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:05:45.871246   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1105 18:05:45.871262   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:05:45.871285   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:05:45.871314   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:05:45.885118   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:05:45.885168   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 18:05:45.885198   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 18:05:45.885198   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 18:05:45.885201   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:05:45.885224   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 18:05:45.895722   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 18:05:45.895762   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
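Each binary transfer above is preceded by an existence check: stat the destination and only scp when the file is absent (the real code also compares size and modification time over SSH). A local-filesystem sketch of that pattern, with a hypothetical copyIfMissing helper:

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing mimics the "existence check for ..." lines above: stat the
// destination and only copy when it does not exist yet.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	err := copyIfMissing(
		"/home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet",
		"/var/lib/minikube/binaries/v1.31.2/kubelet",
	)
	fmt.Println("copy result:", err)
}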
	I1105 18:05:46.776289   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:05:46.785676   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1105 18:05:46.804664   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:05:46.823256   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:05:46.839659   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:05:46.843739   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:05:46.855127   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:05:46.984151   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:05:47.002930   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:05:47.003372   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:47.003427   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:47.019365   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I1105 18:05:47.020121   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:47.020574   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:47.020595   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:47.020908   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:47.021095   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:05:47.021355   27131 start.go:317] joinCluster: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:05:47.021508   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 18:05:47.021529   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:05:47.024802   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:47.025266   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:05:47.025301   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:47.025485   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:05:47.025649   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:05:47.025818   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:05:47.025989   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:05:47.187808   27131 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:05:47.187862   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ywlsrk.n1qe1uf11bwul667 --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03 --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443"
	I1105 18:06:08.756523   27131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ywlsrk.n1qe1uf11bwul667 --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03 --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443": (21.568638959s)
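The --discovery-token-ca-cert-hash in the join command above is kubeadm's pubkeypin format: the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A short Go sketch that recomputes it from ca.crt (path taken from the log) so the value can be checked independently:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Any copy of the cluster CA certificate works; this path is from the log.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's pubkeypin: sha256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}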
	I1105 18:06:08.756554   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 18:06:09.321152   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661-m03 minikube.k8s.io/updated_at=2024_11_05T18_06_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=false
	I1105 18:06:09.429932   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844661-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 18:06:09.553648   27131 start.go:319] duration metric: took 22.532294884s to joinCluster
	I1105 18:06:09.553756   27131 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:06:09.554141   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:09.555396   27131 out.go:177] * Verifying Kubernetes components...
	I1105 18:06:09.556678   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:09.771512   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:06:09.788145   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:06:09.788384   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:06:09.788445   27131 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.48:8443
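The kapi.go client above is built from the test's kubeconfig and then re-pointed from the stale VIP host to a reachable control-plane endpoint, as the kubeadm.go:483 warning notes. A minimal client-go sketch of that override, assuming the kubeconfig path from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/19910-8296/kubeconfig" // path from the log
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	// Override the stale VIP host with a known-good apiserver endpoint.
	cfg.Host = "https://192.168.39.48:8443"
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}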
	I1105 18:06:09.788700   27131 node_ready.go:35] waiting up to 6m0s for node "ha-844661-m03" to be "Ready" ...
	I1105 18:06:09.788799   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:09.788806   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:09.788814   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:09.788817   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:09.792219   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:10.289451   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:10.289477   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:10.289489   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:10.289494   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:10.292860   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:10.789577   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:10.789602   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:10.789615   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:10.789623   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:10.793572   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.289465   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:11.289484   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:11.289492   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:11.289498   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:11.292734   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.789023   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:11.789052   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:11.789064   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:11.789070   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:11.792248   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.792884   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:12.289577   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:12.289596   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:12.289604   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:12.289609   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:12.292931   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:12.789594   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:12.789615   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:12.789623   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:12.789628   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:12.793282   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.288880   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:13.288900   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:13.288909   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:13.288912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:13.292354   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.789203   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:13.789228   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:13.789240   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:13.789244   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:13.792591   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.793128   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:14.289574   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:14.289596   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:14.289605   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:14.289610   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:14.292856   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:14.789821   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:14.789847   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:14.789858   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:14.789863   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:14.793134   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.289398   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:15.289420   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:15.289428   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:15.289433   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:15.292967   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.789567   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:15.789591   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:15.789602   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:15.789607   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:15.793208   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.793657   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:16.289022   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:16.289046   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:16.289056   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.289062   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:16.309335   27131 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1105 18:06:16.789461   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:16.789479   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:16.789488   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.789492   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:16.793000   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:17.289308   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:17.289333   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:17.289345   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:17.289354   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:17.292729   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:17.789752   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:17.789779   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:17.789791   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:17.789798   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:17.794196   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:17.794657   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:18.288931   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:18.288964   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:18.288972   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:18.288976   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:18.292090   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:18.789058   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:18.789080   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:18.789086   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:18.789090   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:18.792559   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:19.289923   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:19.289950   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:19.289961   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:19.289966   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:19.293279   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:19.789125   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:19.789153   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:19.789164   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:19.789170   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:19.792732   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:20.289126   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:20.289149   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:20.289157   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:20.289162   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:20.292641   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:20.293309   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:20.789527   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:20.789549   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:20.789557   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:20.789561   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:20.792849   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:21.289833   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:21.289856   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:21.289863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:21.289867   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:21.293665   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:21.789877   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:21.789900   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:21.789908   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:21.789912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:21.793341   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:22.289645   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:22.289664   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:22.289672   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:22.289676   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:22.292986   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:22.293503   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:22.789122   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:22.789148   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:22.789160   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:22.789164   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:22.792397   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:23.289550   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:23.289574   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:23.289584   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:23.289591   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:23.293009   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:23.789081   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:23.789104   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:23.789112   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:23.789116   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:23.792559   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:24.289408   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:24.289432   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:24.289444   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:24.289448   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:24.293655   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:24.294170   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:24.789552   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:24.789579   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:24.789592   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:24.789598   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:24.792779   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:25.289364   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:25.289386   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:25.289393   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:25.289398   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:25.293189   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:25.789622   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:25.789644   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:25.789652   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:25.789655   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:25.792920   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.288919   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:26.288944   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:26.288954   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:26.288961   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:26.292248   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.789720   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:26.789741   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:26.789749   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:26.789753   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:26.793339   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.793840   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:27.289627   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:27.289653   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:27.289664   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:27.289671   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:27.292896   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:27.789396   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:27.789418   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:27.789426   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:27.789430   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:27.793104   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.288926   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.288950   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.288958   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.288962   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.292349   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.292934   27131 node_ready.go:49] node "ha-844661-m03" has status "Ready":"True"
	I1105 18:06:28.292959   27131 node_ready.go:38] duration metric: took 18.504244816s for node "ha-844661-m03" to be "Ready" ...
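The GET /api/v1/nodes/ha-844661-m03 loop above is node_ready.go polling roughly every 500ms until the node's Ready condition turns True, within the 6m0s budget. A client-go sketch of such a wait (names here are illustrative, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// much like node_ready.go does for "ha-844661-m03" above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19910-8296/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-844661-m03"); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-844661-m03" is Ready`)
}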
	I1105 18:06:28.292967   27131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:06:28.293052   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:28.293062   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.293069   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.293073   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.298865   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:06:28.305101   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.305172   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4bdfz
	I1105 18:06:28.305180   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.305187   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.305191   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.308014   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.308823   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.308838   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.308845   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.308848   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.311202   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.311752   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.311769   27131 pod_ready.go:82] duration metric: took 6.646273ms for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.311778   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.311825   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5g97
	I1105 18:06:28.311833   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.311839   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.311842   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.314162   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.315006   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.315022   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.315032   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.315037   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.317112   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.317771   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.317790   27131 pod_ready.go:82] duration metric: took 6.006174ms for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.317799   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.317847   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661
	I1105 18:06:28.317855   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.317861   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.317869   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.320184   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.320779   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.320794   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.320801   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.320804   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.323022   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.323542   27131 pod_ready.go:93] pod "etcd-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.323560   27131 pod_ready.go:82] duration metric: took 5.754386ms for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.323568   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.323613   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m02
	I1105 18:06:28.323621   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.323627   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.323631   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.325924   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.326482   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:28.326496   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.326503   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.326510   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.328928   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.329392   27131 pod_ready.go:93] pod "etcd-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.329412   27131 pod_ready.go:82] duration metric: took 5.837481ms for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.329426   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.489824   27131 request.go:632] Waited for 160.309715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m03
	I1105 18:06:28.489893   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m03
	I1105 18:06:28.489899   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.489906   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.489914   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.493239   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.689345   27131 request.go:632] Waited for 195.357359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.689416   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.689422   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.689430   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.689436   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.692948   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.693449   27131 pod_ready.go:93] pod "etcd-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.693468   27131 pod_ready.go:82] duration metric: took 364.031884ms for pod "etcd-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.693488   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.889759   27131 request.go:632] Waited for 196.181442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:06:28.889818   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:06:28.889823   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.889830   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.889836   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.893294   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.089232   27131 request.go:632] Waited for 195.272157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:29.089332   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:29.089345   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.089355   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.089363   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.092371   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:29.093062   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.093081   27131 pod_ready.go:82] duration metric: took 399.581249ms for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.093095   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.289039   27131 request.go:632] Waited for 195.870378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:06:29.289108   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:06:29.289114   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.289121   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.289127   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.292782   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.489337   27131 request.go:632] Waited for 195.348089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:29.489423   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:29.489428   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.489439   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.489446   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.492721   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.493290   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.493309   27131 pod_ready.go:82] duration metric: took 400.203815ms for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.493320   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.689371   27131 request.go:632] Waited for 195.98498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m03
	I1105 18:06:29.689467   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m03
	I1105 18:06:29.689479   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.689489   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.689497   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.692955   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.888986   27131 request.go:632] Waited for 195.295088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:29.889053   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:29.889060   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.889071   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.889080   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.892048   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:29.892533   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.892549   27131 pod_ready.go:82] duration metric: took 399.221552ms for pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.892559   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.089669   27131 request.go:632] Waited for 197.039051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:06:30.089731   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:06:30.089736   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.089745   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.089749   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.093164   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.289306   27131 request.go:632] Waited for 195.324188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:30.289372   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:30.289384   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.289397   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.289407   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.292636   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.293206   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:30.293227   27131 pod_ready.go:82] duration metric: took 400.66121ms for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.293238   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.489536   27131 request.go:632] Waited for 196.217205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:06:30.489646   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:06:30.489658   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.489668   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.489675   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.493045   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.688919   27131 request.go:632] Waited for 195.135908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:30.688971   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:30.688976   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.688984   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.688988   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.692203   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.692968   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:30.692987   27131 pod_ready.go:82] duration metric: took 399.741193ms for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.693001   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.889370   27131 request.go:632] Waited for 196.304824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m03
	I1105 18:06:30.889450   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m03
	I1105 18:06:30.889457   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.889465   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.889472   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.892647   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.089803   27131 request.go:632] Waited for 196.376037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.089851   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.089855   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.089863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.089869   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.093035   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.093548   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.093568   27131 pod_ready.go:82] duration metric: took 400.558908ms for pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.093580   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mk9m" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.289696   27131 request.go:632] Waited for 196.055175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mk9m
	I1105 18:06:31.289756   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mk9m
	I1105 18:06:31.289761   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.289768   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.289772   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.293304   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.489478   27131 request.go:632] Waited for 195.351968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.489541   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.489549   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.489556   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.489562   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.492991   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.493563   27131 pod_ready.go:93] pod "kube-proxy-2mk9m" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.493582   27131 pod_ready.go:82] duration metric: took 399.995731ms for pod "kube-proxy-2mk9m" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.493592   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.689978   27131 request.go:632] Waited for 196.300604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:06:31.690038   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:06:31.690043   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.690050   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.690053   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.693380   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.889851   27131 request.go:632] Waited for 195.375559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:31.889905   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:31.889910   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.889917   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.889922   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.893474   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.894113   27131 pod_ready.go:93] pod "kube-proxy-pjpkh" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.894132   27131 pod_ready.go:82] duration metric: took 400.533639ms for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.894142   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.089665   27131 request.go:632] Waited for 195.450073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:06:32.089735   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:06:32.089740   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.089747   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.089751   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.093190   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.289235   27131 request.go:632] Waited for 195.339549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:32.289293   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:32.289310   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.289317   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.289321   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.292485   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.293147   27131 pod_ready.go:93] pod "kube-proxy-zsbfs" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:32.293172   27131 pod_ready.go:82] duration metric: took 399.02399ms for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.293182   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.489243   27131 request.go:632] Waited for 195.995375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:06:32.489308   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:06:32.489316   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.489324   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.489327   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.493003   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.689901   27131 request.go:632] Waited for 196.356448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:32.689953   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:32.689958   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.689966   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.689970   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.693190   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.693742   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:32.693763   27131 pod_ready.go:82] duration metric: took 400.573652ms for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.693777   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.889556   27131 request.go:632] Waited for 195.689425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:06:32.889607   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:06:32.889612   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.889620   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.889624   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.893476   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.089475   27131 request.go:632] Waited for 195.357977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:33.089527   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:33.089532   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.089539   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.089543   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.092888   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.093460   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:33.093481   27131 pod_ready.go:82] duration metric: took 399.697128ms for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.093491   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.289500   27131 request.go:632] Waited for 195.942997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m03
	I1105 18:06:33.289569   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m03
	I1105 18:06:33.289576   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.289585   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.289589   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.293636   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:33.489851   27131 request.go:632] Waited for 195.367744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:33.489908   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:33.489913   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.489920   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.489924   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.493512   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.494235   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:33.494258   27131 pod_ready.go:82] duration metric: took 400.759685ms for pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.494276   27131 pod_ready.go:39] duration metric: took 5.201298893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
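	The pod_ready phase above applies the same condition check to each system-critical pod: a pod counts as Ready once its PodReady condition is True. A small sketch of that check for one pod named in the log (kube-system/etcd-ha-844661-m03); the kubeconfig path is again a placeholder:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True, mirroring
	// the pod_ready.go checks in the log above.
	func podReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; namespace and pod name come from the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-ha-844661-m03", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s Ready=%v\n", pod.Name, podReady(pod))
	}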
	I1105 18:06:33.494295   27131 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:06:33.494356   27131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:06:33.509380   27131 api_server.go:72] duration metric: took 23.955584698s to wait for apiserver process to appear ...
	I1105 18:06:33.509409   27131 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:06:33.509433   27131 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1105 18:06:33.514022   27131 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1105 18:06:33.514097   27131 round_trippers.go:463] GET https://192.168.39.48:8443/version
	I1105 18:06:33.514107   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.514114   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.514119   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.514958   27131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 18:06:33.515041   27131 api_server.go:141] control plane version: v1.31.2
	I1105 18:06:33.515056   27131 api_server.go:131] duration metric: took 5.640397ms to wait for apiserver health ...
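	The two probes above are plain GETs against the API server: /healthz returns the literal body "ok" when healthy, and /version reports the control plane build (v1.31.2 in this run). A sketch of the same pair of calls through client-go, with the kubeconfig path again a placeholder:

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; not taken from this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// GET /healthz; a healthy apiserver answers with the body "ok".
		raw, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", raw)

		// GET /version; GitVersion is what the log reports as the control plane version.
		info, err := client.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Printf("control plane version: %s\n", info.GitVersion)
	}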
	I1105 18:06:33.515062   27131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:06:33.689459   27131 request.go:632] Waited for 174.322152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:33.689543   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:33.689554   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.689564   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.689570   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.695696   27131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:06:33.701785   27131 system_pods.go:59] 24 kube-system pods found
	I1105 18:06:33.701817   27131 system_pods.go:61] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:06:33.701822   27131 system_pods.go:61] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:06:33.701826   27131 system_pods.go:61] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:06:33.701829   27131 system_pods.go:61] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:06:33.701832   27131 system_pods.go:61] "etcd-ha-844661-m03" [c8179289-e67f-4a2b-bba3-1387aa107d3e] Running
	I1105 18:06:33.701836   27131 system_pods.go:61] "kindnet-fzrh6" [985ef0b3-91cc-4965-a1f3-a8e468eba2ee] Running
	I1105 18:06:33.701839   27131 system_pods.go:61] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:06:33.701842   27131 system_pods.go:61] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:06:33.701845   27131 system_pods.go:61] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:06:33.701849   27131 system_pods.go:61] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:06:33.701852   27131 system_pods.go:61] "kube-apiserver-ha-844661-m03" [57a94b5d-466e-4d87-ba16-ceba58d65ee0] Running
	I1105 18:06:33.701858   27131 system_pods.go:61] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:06:33.701864   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:06:33.701868   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m03" [dcadcdf5-6004-49a9-800b-f27798ab06db] Running
	I1105 18:06:33.701872   27131 system_pods.go:61] "kube-proxy-2mk9m" [483f248e-9776-4c11-a088-a2cbd152602b] Running
	I1105 18:06:33.701875   27131 system_pods.go:61] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:06:33.701879   27131 system_pods.go:61] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:06:33.701882   27131 system_pods.go:61] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:06:33.701886   27131 system_pods.go:61] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:06:33.701889   27131 system_pods.go:61] "kube-scheduler-ha-844661-m03" [711f353f-ee82-4066-98ff-e3349082bf32] Running
	I1105 18:06:33.701894   27131 system_pods.go:61] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:06:33.701897   27131 system_pods.go:61] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:06:33.701900   27131 system_pods.go:61] "kube-vip-ha-844661-m03" [5ebe3d8b-e1e2-4d10-bf5c-d88148144dd1] Running
	I1105 18:06:33.701903   27131 system_pods.go:61] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:06:33.701909   27131 system_pods.go:74] duration metric: took 186.841773ms to wait for pod list to return data ...
	I1105 18:06:33.701919   27131 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:06:33.889363   27131 request.go:632] Waited for 187.358199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:06:33.889435   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:06:33.889442   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.889452   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.889459   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.893683   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:33.893791   27131 default_sa.go:45] found service account: "default"
	I1105 18:06:33.893804   27131 default_sa.go:55] duration metric: took 191.879725ms for default service account to be created ...
	I1105 18:06:33.893811   27131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:06:34.089215   27131 request.go:632] Waited for 195.345636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:34.089283   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:34.089291   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:34.089303   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:34.089323   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:34.096363   27131 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:06:34.102465   27131 system_pods.go:86] 24 kube-system pods found
	I1105 18:06:34.102491   27131 system_pods.go:89] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:06:34.102496   27131 system_pods.go:89] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:06:34.102501   27131 system_pods.go:89] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:06:34.102505   27131 system_pods.go:89] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:06:34.102508   27131 system_pods.go:89] "etcd-ha-844661-m03" [c8179289-e67f-4a2b-bba3-1387aa107d3e] Running
	I1105 18:06:34.102512   27131 system_pods.go:89] "kindnet-fzrh6" [985ef0b3-91cc-4965-a1f3-a8e468eba2ee] Running
	I1105 18:06:34.102515   27131 system_pods.go:89] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:06:34.102519   27131 system_pods.go:89] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:06:34.102522   27131 system_pods.go:89] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:06:34.102525   27131 system_pods.go:89] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:06:34.102529   27131 system_pods.go:89] "kube-apiserver-ha-844661-m03" [57a94b5d-466e-4d87-ba16-ceba58d65ee0] Running
	I1105 18:06:34.102533   27131 system_pods.go:89] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:06:34.102537   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:06:34.102541   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m03" [dcadcdf5-6004-49a9-800b-f27798ab06db] Running
	I1105 18:06:34.102545   27131 system_pods.go:89] "kube-proxy-2mk9m" [483f248e-9776-4c11-a088-a2cbd152602b] Running
	I1105 18:06:34.102551   27131 system_pods.go:89] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:06:34.102554   27131 system_pods.go:89] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:06:34.102557   27131 system_pods.go:89] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:06:34.102561   27131 system_pods.go:89] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:06:34.102564   27131 system_pods.go:89] "kube-scheduler-ha-844661-m03" [711f353f-ee82-4066-98ff-e3349082bf32] Running
	I1105 18:06:34.102569   27131 system_pods.go:89] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:06:34.102573   27131 system_pods.go:89] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:06:34.102578   27131 system_pods.go:89] "kube-vip-ha-844661-m03" [5ebe3d8b-e1e2-4d10-bf5c-d88148144dd1] Running
	I1105 18:06:34.102581   27131 system_pods.go:89] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:06:34.102586   27131 system_pods.go:126] duration metric: took 208.77013ms to wait for k8s-apps to be running ...
	I1105 18:06:34.102595   27131 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:06:34.102637   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:06:34.118557   27131 system_svc.go:56] duration metric: took 15.951864ms WaitForService to wait for kubelet
	I1105 18:06:34.118583   27131 kubeadm.go:582] duration metric: took 24.564791625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:06:34.118612   27131 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:06:34.288972   27131 request.go:632] Waited for 170.274451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I1105 18:06:34.289022   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes
	I1105 18:06:34.289035   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:34.289055   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:34.289062   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:34.292646   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:34.294249   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294283   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294309   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294316   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294322   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294327   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294335   27131 node_conditions.go:105] duration metric: took 175.714114ms to run NodePressure ...
	I1105 18:06:34.294352   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:06:34.294390   27131 start.go:255] writing updated cluster config ...
	I1105 18:06:34.294711   27131 ssh_runner.go:195] Run: rm -f paused
	I1105 18:06:34.347073   27131 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 18:06:34.348891   27131 out.go:177] * Done! kubectl is now configured to use "ha-844661" cluster and "default" namespace by default
	
	
	==> CRI-O <==
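	The CRI-O lines that follow are the runtime's debug log of CRI calls arriving on its unix socket: Version and ListContainers on runtime.v1.RuntimeService, and ImageFsInfo on runtime.v1.ImageService. A minimal sketch of issuing the first two RPCs directly over gRPC; the socket path below is CRI-O's conventional default, an assumption rather than something taken from this log:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default socket path (assumption; not read from this log).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// /runtime.v1.RuntimeService/Version, as seen in the debug log.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// /runtime.v1.RuntimeService/ListContainers with an empty filter, which is
		// why the log notes "No filters were applied, returning full container list".
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Printf("%s  %-25s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}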
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.527657882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830225527624883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09668fde-6b70-49cf-a036-e233cf72f708 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.528411404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=136acef8-3312-4b62-9ed2-255cb5eca17a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.528462981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=136acef8-3312-4b62-9ed2-255cb5eca17a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.528730669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=136acef8-3312-4b62-9ed2-255cb5eca17a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.569323606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2114a75-8025-4c0f-bcde-e5878614fda8 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.569393752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2114a75-8025-4c0f-bcde-e5878614fda8 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.570655221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1822d8d5-95d0-4150-bbb9-270439dcefc7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.571090247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830225571066403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1822d8d5-95d0-4150-bbb9-270439dcefc7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.571838103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=303ea944-8d12-4995-a6d0-b25f248c711a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.571891617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=303ea944-8d12-4995-a6d0-b25f248c711a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.572519670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=303ea944-8d12-4995-a6d0-b25f248c711a name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.606821343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab9fdc1c-1f71-490b-8a96-c779fe8f692d name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.606888581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab9fdc1c-1f71-490b-8a96-c779fe8f692d name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.607691012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ecadc99-fe31-4401-b5a2-41233e8851a5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.608132510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830225608101431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ecadc99-fe31-4401-b5a2-41233e8851a5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.608767495Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2229df8e-ded4-457b-bd5d-96e799c6264f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.608817014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2229df8e-ded4-457b-bd5d-96e799c6264f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.609040780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2229df8e-ded4-457b-bd5d-96e799c6264f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.650775441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60ba8813-2bf4-4c79-ad27-0f5f1545c983 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.650856744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60ba8813-2bf4-4c79-ad27-0f5f1545c983 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.652082748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d107dc8a-f959-444f-a737-02c155f11c83 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.652641787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830225652617044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d107dc8a-f959-444f-a737-02c155f11c83 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.653062853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3af50886-f425-4d85-899a-1da20a9fb8ba name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.653110432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3af50886-f425-4d85-899a-1da20a9fb8ba name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:25 ha-844661 crio[658]: time="2024-11-05 18:10:25.653409371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3af50886-f425-4d85-899a-1da20a9fb8ba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f547082b18e22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   27e18ae242703       busybox-7dff88458-lzhpc
	4504233c88e52       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   7b8c6b865e4b8       coredns-7c65d6cfc9-4bdfz
	2c9fc5d833b41       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   44bedf8a84dbf       coredns-7c65d6cfc9-s5g97
	258fd7ae93626       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b59a04159a4fb       storage-provisioner
	bf77486744a30       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   565a0867a4a3a       kindnet-vz22j
	1c753c07805a4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   a2589ca7aa1a5       kube-proxy-pjpkh
	9fc3970511492       ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f     6 minutes ago       Running             kube-vip                  0                   229c492a7d447       kube-vip-ha-844661
	f06b75f1a2501       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   da4d3442917c5       etcd-ha-844661
	695ba2636aaa9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   45ce87c5b9a86       kube-scheduler-ha-844661
	d6c4df0798539       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   c3cdeb3fb2bc9       kube-apiserver-ha-844661
	9fc529f9c17c8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   8cfef6eeee31d       kube-controller-manager-ha-844661
	
	
	==> coredns [2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a] <==
	[INFO] 10.244.3.2:48122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001817736s
	[INFO] 10.244.1.2:41485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154354s
	[INFO] 10.244.0.4:48696 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00417262s
	[INFO] 10.244.0.4:39724 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011241203s
	[INFO] 10.244.0.4:33801 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201157s
	[INFO] 10.244.3.2:59342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205557s
	[INFO] 10.244.3.2:38358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000335352s
	[INFO] 10.244.3.2:50220 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290051s
	[INFO] 10.244.1.2:42991 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002076706s
	[INFO] 10.244.1.2:38070 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182659s
	[INFO] 10.244.1.2:38061 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120824s
	[INFO] 10.244.0.4:55480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107684s
	[INFO] 10.244.3.2:54459 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094155s
	[INFO] 10.244.3.2:56770 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159318s
	[INFO] 10.244.1.2:46930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145588s
	[INFO] 10.244.1.2:51686 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000234893s
	[INFO] 10.244.1.2:43604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089852s
	[INFO] 10.244.0.4:59908 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00031712s
	[INFO] 10.244.3.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016445s
	[INFO] 10.244.3.2:35219 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306046s
	[INFO] 10.244.3.2:45286 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016761s
	[INFO] 10.244.1.2:48376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282486s
	[INFO] 10.244.1.2:44477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097938s
	[INFO] 10.244.1.2:51521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175252s
	[INFO] 10.244.1.2:42468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076611s
	
	
	==> coredns [4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8] <==
	[INFO] 10.244.0.4:38561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176278s
	[INFO] 10.244.0.4:47328 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000239279s
	[INFO] 10.244.0.4:37188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002005s
	[INFO] 10.244.0.4:40443 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116158s
	[INFO] 10.244.0.4:39770 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000216794s
	[INFO] 10.244.3.2:58499 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947267s
	[INFO] 10.244.3.2:50696 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001435907s
	[INFO] 10.244.3.2:53598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101366s
	[INFO] 10.244.3.2:40278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021319s
	[INFO] 10.244.3.2:35533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073855s
	[INFO] 10.244.1.2:57627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215883s
	[INFO] 10.244.1.2:58558 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015092s
	[INFO] 10.244.1.2:44310 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409552s
	[INFO] 10.244.1.2:44445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145932s
	[INFO] 10.244.1.2:53561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124269s
	[INFO] 10.244.0.4:42872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279983s
	[INFO] 10.244.0.4:56987 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127988s
	[INFO] 10.244.0.4:36230 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209676s
	[INFO] 10.244.3.2:59508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020584s
	[INFO] 10.244.3.2:54542 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160368s
	[INFO] 10.244.1.2:52317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136132s
	[INFO] 10.244.0.4:56988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179513s
	[INFO] 10.244.0.4:39632 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244979s
	[INFO] 10.244.0.4:60960 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110854s
	[INFO] 10.244.3.2:58476 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000304046s
	
	
	==> describe nodes <==
	Name:               ha-844661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T18_03_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:03:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-844661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee44951a983a4e549dbb04cb8a2493c9
	  System UUID:                ee44951a-983a-4e54-9dbb-04cb8a2493c9
	  Boot ID:                    4c65764c-54aa-465a-bc8a-8a5365b789a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzhpc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 coredns-7c65d6cfc9-4bdfz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-s5g97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-844661                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-vz22j                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-844661             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-844661    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-pjpkh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-844661             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-844661                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m22s  kube-proxy       
	  Normal  Starting                 6m27s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s  kubelet          Node ha-844661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s  kubelet          Node ha-844661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s  kubelet          Node ha-844661 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	  Normal  NodeReady                6m6s   kubelet          Node ha-844661 status is now: NodeReady
	  Normal  RegisteredNode           5m23s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	  Normal  RegisteredNode           4m11s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	
	
	Name:               ha-844661-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_04_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    ha-844661-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75eddb8895b44c028e3869c19333df27
	  System UUID:                75eddb88-95b4-4c02-8e38-69c19333df27
	  Boot ID:                    703a3f97-42af-45ac-b300-e4714fc82ae4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vkchm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-844661-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m30s
	  kube-system                 kindnet-q898d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m32s
	  kube-system                 kube-apiserver-ha-844661-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-controller-manager-ha-844661-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-zsbfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-ha-844661-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-vip-ha-844661-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m32s                  cidrAllocator    Node ha-844661-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m32s (x8 over 5m32s)  kubelet          Node ha-844661-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s (x8 over 5m32s)  kubelet          Node ha-844661-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m32s (x7 over 5m32s)  kubelet          Node ha-844661-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  NodeNotReady             117s                   node-controller  Node ha-844661-m02 status is now: NodeNotReady
	
	
	Name:               ha-844661-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_06_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:06:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    ha-844661-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eaab072d40e24724bda026ac82fdd308
	  System UUID:                eaab072d-40e2-4724-bda0-26ac82fdd308
	  Boot ID:                    db511fc0-c5d5-4348-8360-c6fc1b44808f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mwvv2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-844661-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kindnet-fzrh6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-844661-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-844661-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-2mk9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-844661-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-vip-ha-844661-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m20s                  cidrAllocator    Node ha-844661-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node ha-844661-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node ha-844661-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node ha-844661-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	
	
	Name:               ha-844661-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_07_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-844661-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9adceb878ab74645bb56707a0ab9854e
	  System UUID:                9adceb87-8ab7-4645-bb56-707a0ab9854e
	  Boot ID:                    0b1794d4-8e9f-4a02-ba93-5010c0d8fbf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7tcjz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-8bw6z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m7s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     3m13s                  cidrAllocator    Node ha-844661-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     3m13s                  cidrAllocator    Node ha-844661-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m13s)  kubelet          Node ha-844661-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m13s)  kubelet          Node ha-844661-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m13s)  kubelet          Node ha-844661-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-844661-m04 status is now: NodeReady
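
The node descriptions above are the key signal for this failure: ha-844661-m02 reports every condition as Unknown with node.kubernetes.io/unreachable taints after its kubelet stopped posting status, while m03 and m04 stayed Ready. A minimal way to re-check just that, assuming the ha-844661 context from this run is still available, is:

    kubectl --context ha-844661 get node ha-844661-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}{.spec.taints[*].key}{"\n"}'

The first line of output is the Ready condition status (Unknown here); the second lists the unreachable taints that the node-lifecycle controller applied.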
	
	
	==> dmesg <==
	[Nov 5 18:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051370] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036705] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826003] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.830792] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.518259] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.512732] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.062769] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057746] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.181267] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.115768] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.273995] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.824232] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.167137] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.060834] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.275907] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.079756] kauditd_printk_skb: 79 callbacks suppressed
	[Nov 5 18:04] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.402917] kauditd_printk_skb: 32 callbacks suppressed
	[Nov 5 18:05] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc] <==
	{"level":"warn","ts":"2024-11-05T18:10:25.917493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:25.918409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:25.926430Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:25.935287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:25.942083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:25.949969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:25.955865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:25.959566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.026933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.027319Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.034356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.041911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.046452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.049839Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.060516Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.067587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.073808Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.077530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.080548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.084940Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.091320Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.098028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.126508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.161980Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:26.163555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:10:26 up 7 min,  0 users,  load average: 0.29, 0.41, 0.21
	Linux ha-844661 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf] <==
	I1105 18:09:48.981804       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:09:58.979695       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:09:58.979736       1 main.go:301] handling current node
	I1105 18:09:58.979751       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:09:58.979757       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:09:58.979941       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:09:58.979961       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:09:58.980047       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:09:58.980065       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:10:08.975320       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:10:08.975425       1 main.go:301] handling current node
	I1105 18:10:08.975448       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:10:08.975457       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:10:08.975728       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:10:08.975758       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:10:08.975910       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:10:08.975933       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:10:18.980134       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:10:18.980289       1 main.go:301] handling current node
	I1105 18:10:18.980325       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:10:18.980334       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:10:18.980658       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:10:18.980687       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:10:18.980836       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:10:18.980863       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
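
kindnet on the primary still iterates over all four nodes and their pod CIDRs, so the overlay routing state is intact; only m02's kubelet is silent. The routes it has programmed for the 10.244.0.0/16 ranges seen above can be inspected from inside the VM, as a quick sketch using minikube ssh:

    minikube -p ha-844661 ssh -- ip route show | grep 10.244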
	
	
	==> kube-apiserver [d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f] <==
	W1105 18:03:56.787950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.48]
	I1105 18:03:56.789794       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:03:56.795759       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:03:56.988233       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 18:03:58.574343       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 18:03:58.589042       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1105 18:03:58.611994       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 18:04:02.140726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1105 18:04:02.242563       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1105 18:06:39.847316       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39688: use of closed network connection
	E1105 18:06:40.021738       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39706: use of closed network connection
	E1105 18:06:40.204127       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39716: use of closed network connection
	E1105 18:06:40.398615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39728: use of closed network connection
	E1105 18:06:40.573865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39736: use of closed network connection
	E1105 18:06:40.752398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39760: use of closed network connection
	E1105 18:06:40.936783       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39766: use of closed network connection
	E1105 18:06:41.111519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39780: use of closed network connection
	E1105 18:06:41.286054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39802: use of closed network connection
	E1105 18:06:41.573950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39826: use of closed network connection
	E1105 18:06:41.738524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39836: use of closed network connection
	E1105 18:06:41.904845       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39854: use of closed network connection
	E1105 18:06:42.073866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39862: use of closed network connection
	E1105 18:06:42.246567       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39868: use of closed network connection
	E1105 18:06:42.411961       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39894: use of closed network connection
	W1105 18:08:06.801135       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.48 192.168.39.52]
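
The apiserver errors are read failures on connections that arrived through the HA virtual IP 192.168.39.254:8443 and were closed by the client; the later lease reset shows the kubernetes endpoints being rewritten to the two control planes that are still reachable. Whether the VIP path still serves requests can be checked through the normal authenticated client path, as a sketch:

    kubectl --context ha-844661 get --raw='/readyz?verbose'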
	
	
	==> kube-controller-manager [9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c] <==
	E1105 18:07:13.653435       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-844661-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-844661-m04"
	E1105 18:07:13.653555       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-844661-m04': failed to patch node CIDR: Node \"ha-844661-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1105 18:07:13.653638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:13.659637       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:13.797662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:14.149565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:14.559123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:16.780529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:16.780718       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-844661-m04"
	I1105 18:07:16.994375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:17.944364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:18.017747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:23.969145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:33.222978       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844661-m04"
	I1105 18:07:33.223667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:33.239449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:34.533989       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:44.277626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:08:29.557990       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844661-m04"
	I1105 18:08:29.558983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:29.585475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:29.697679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.853166ms"
	I1105 18:08:29.699962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.926µs"
	I1105 18:08:31.887524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:34.788426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
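
The controller-manager tried to patch a second pod CIDR (10.244.4.0/24 alongside 10.244.2.0/24) onto m04, which the API rejected, producing the CIDRAssignmentFailed events in the node descriptions; the node kept 10.244.2.0/24, so the failure is noisy rather than fatal. The effective assignment per node can be listed directly against the same context, as a sketch:

    kubectl --context ha-844661 get nodes \
      -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR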
	
	
	==> kube-proxy [1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:04:03.571824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:04:03.590655       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E1105 18:04:03.590765       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:04:03.621086       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:04:03.621144       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:04:03.621208       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:04:03.623505       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:04:03.623772       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:04:03.623783       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:04:03.625873       1 config.go:199] "Starting service config controller"
	I1105 18:04:03.625922       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:04:03.625956       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:04:03.625972       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:04:03.628076       1 config.go:328] "Starting node config controller"
	I1105 18:04:03.628108       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:04:03.726043       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:04:03.726043       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:04:03.728252       1 shared_informer.go:320] Caches are synced for node config
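
kube-proxy failed to clean up leftover nftables rules because the guest kernel does not support the nft operations it issues, then started normally with the iptables Proxier in single-stack IPv4 mode; the "Caches are synced" lines show it came up. The active mode can be confirmed from the proxy logs, a sketch assuming the standard kubeadm k8s-app=kube-proxy label:

    kubectl --context ha-844661 -n kube-system logs -l k8s-app=kube-proxy --tail=100 | grep -i proxier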
	
	
	==> kube-scheduler [695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab] <==
	E1105 18:03:56.072125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.276682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 18:03:56.276737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.329770       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 18:03:56.329820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.398642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:03:56.398687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1105 18:03:57.639067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 18:06:35.211549       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9e352dc6-ed87-4112-85c5-a76c00a8912f" pod="default/busybox-7dff88458-vkchm" assumedNode="ha-844661-m02" currentNode="ha-844661-m03"
	E1105 18:06:35.223911       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vkchm\": pod busybox-7dff88458-vkchm is already assigned to node \"ha-844661-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vkchm" node="ha-844661-m03"
	E1105 18:06:35.226313       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9e352dc6-ed87-4112-85c5-a76c00a8912f(default/busybox-7dff88458-vkchm) was assumed on ha-844661-m03 but assigned to ha-844661-m02" pod="default/busybox-7dff88458-vkchm"
	E1105 18:06:35.226429       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vkchm\": pod busybox-7dff88458-vkchm is already assigned to node \"ha-844661-m02\"" pod="default/busybox-7dff88458-vkchm"
	I1105 18:06:35.226528       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vkchm" node="ha-844661-m02"
	E1105 18:06:35.274759       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lzhpc\": pod busybox-7dff88458-lzhpc is already assigned to node \"ha-844661\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lzhpc" node="ha-844661"
	E1105 18:06:35.275967       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8687b103-4a1a-4529-9efd-46405325fb04(default/busybox-7dff88458-lzhpc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lzhpc"
	E1105 18:06:35.276226       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lzhpc\": pod busybox-7dff88458-lzhpc is already assigned to node \"ha-844661\"" pod="default/busybox-7dff88458-lzhpc"
	I1105 18:06:35.276363       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lzhpc" node="ha-844661"
	E1105 18:07:13.665747       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tfzng\": pod kube-proxy-tfzng is already assigned to node \"ha-844661-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tfzng" node="ha-844661-m04"
	E1105 18:07:13.665825       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f52b30f-7446-45ac-bb36-73398ffbfbc2(kube-system/kube-proxy-tfzng) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tfzng"
	E1105 18:07:13.665842       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tfzng\": pod kube-proxy-tfzng is already assigned to node \"ha-844661-m04\"" pod="kube-system/kube-proxy-tfzng"
	I1105 18:07:13.665872       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tfzng" node="ha-844661-m04"
	E1105 18:07:13.666212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vjq6v\": pod kindnet-vjq6v is already assigned to node \"ha-844661-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vjq6v" node="ha-844661-m04"
	E1105 18:07:13.666376       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d9f2bfec-eb1f-4373-bf3a-414ed6c8a630(kube-system/kindnet-vjq6v) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vjq6v"
	E1105 18:07:13.666420       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vjq6v\": pod kindnet-vjq6v is already assigned to node \"ha-844661-m04\"" pod="kube-system/kindnet-vjq6v"
	I1105 18:07:13.666453       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vjq6v" node="ha-844661-m04"
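
The scheduler errors are benign binding races: in each case the Bind call found the pod already assigned, the scheduler discarded its own assumption, and the pod stayed on the node recorded in the API server. Where the busybox replicas actually landed can be verified with a simple listing, as a sketch:

    kubectl --context ha-844661 get pods -n default -o wide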
	
	
	==> kubelet <==
	Nov 05 18:08:58 ha-844661 kubelet[1296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:08:58 ha-844661 kubelet[1296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:08:58 ha-844661 kubelet[1296]: E1105 18:08:58.595270    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830138594734384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:58 ha-844661 kubelet[1296]: E1105 18:08:58.595295    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830138594734384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:08 ha-844661 kubelet[1296]: E1105 18:09:08.597057    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830148596755320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:08 ha-844661 kubelet[1296]: E1105 18:09:08.597097    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830148596755320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:18 ha-844661 kubelet[1296]: E1105 18:09:18.599471    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830158599122023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:18 ha-844661 kubelet[1296]: E1105 18:09:18.599506    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830158599122023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:28 ha-844661 kubelet[1296]: E1105 18:09:28.601448    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830168600902243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:28 ha-844661 kubelet[1296]: E1105 18:09:28.601554    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830168600902243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:38 ha-844661 kubelet[1296]: E1105 18:09:38.606338    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830178605104359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:38 ha-844661 kubelet[1296]: E1105 18:09:38.606359    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830178605104359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:48 ha-844661 kubelet[1296]: E1105 18:09:48.608274    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830188607885225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:48 ha-844661 kubelet[1296]: E1105 18:09:48.608666    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830188607885225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.519242    1296 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:09:58 ha-844661 kubelet[1296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.611279    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830198610818845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.611302    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830198610818845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:08 ha-844661 kubelet[1296]: E1105 18:10:08.613551    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830208612853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:08 ha-844661 kubelet[1296]: E1105 18:10:08.613956    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830208612853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:18 ha-844661 kubelet[1296]: E1105 18:10:18.616403    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830218615829286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:18 ha-844661 kubelet[1296]: E1105 18:10:18.616436    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830218615829286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844661 -n ha-844661
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.41s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1105 18:10:29.995149   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.146754489s)
ha_test.go:309: expected profile "ha-844661" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844661\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-844661\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-844661\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.48\",\"Port\":8443,\"Kubernet
esVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.38\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.52\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.89\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":f
alse,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"Mo
untIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844661 -n ha-844661
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 logs -n 25: (1.381471227s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m03_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m04 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp testdata/cp-test.txt                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m04_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03:/home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m03 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-844661 node stop m02 -v=7                                                     | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-844661 node start m02 -v=7                                                    | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:03:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:03:20.652608   27131 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:03:20.652749   27131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:03:20.652760   27131 out.go:358] Setting ErrFile to fd 2...
	I1105 18:03:20.652767   27131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:03:20.652948   27131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:03:20.653500   27131 out.go:352] Setting JSON to false
	I1105 18:03:20.654349   27131 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2743,"bootTime":1730827058,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:03:20.654437   27131 start.go:139] virtualization: kvm guest
	I1105 18:03:20.656534   27131 out.go:177] * [ha-844661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:03:20.657972   27131 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:03:20.658005   27131 notify.go:220] Checking for updates...
	I1105 18:03:20.660463   27131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:03:20.661864   27131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:03:20.663111   27131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:20.664367   27131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:03:20.665603   27131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:03:20.666934   27131 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:03:20.701089   27131 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 18:03:20.702358   27131 start.go:297] selected driver: kvm2
	I1105 18:03:20.702375   27131 start.go:901] validating driver "kvm2" against <nil>
	I1105 18:03:20.702385   27131 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:03:20.703116   27131 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:03:20.703189   27131 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:03:20.718290   27131 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:03:20.718330   27131 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 18:03:20.718556   27131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:03:20.718584   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:20.718622   27131 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1105 18:03:20.718632   27131 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 18:03:20.718676   27131 start.go:340] cluster config:
	{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1105 18:03:20.718795   27131 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:03:20.720599   27131 out.go:177] * Starting "ha-844661" primary control-plane node in "ha-844661" cluster
	I1105 18:03:20.721815   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:03:20.721849   27131 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:03:20.721872   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:03:20.721982   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:03:20.721996   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:03:20.722409   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:03:20.722435   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json: {Name:mkaefcdd76905e10868a2bf21132faf3026da59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:20.722574   27131 start.go:360] acquireMachinesLock for ha-844661: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:03:20.722613   27131 start.go:364] duration metric: took 21.652µs to acquireMachinesLock for "ha-844661"
	I1105 18:03:20.722627   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:03:20.722690   27131 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 18:03:20.724172   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:03:20.724279   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:03:20.724320   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:03:20.738289   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I1105 18:03:20.738756   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:03:20.739283   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:03:20.739302   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:03:20.739702   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:03:20.739881   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:20.740007   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:20.740175   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:03:20.740205   27131 client.go:168] LocalClient.Create starting
	I1105 18:03:20.740238   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:03:20.740272   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:03:20.740288   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:03:20.740341   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:03:20.740359   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:03:20.740374   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:03:20.740388   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:03:20.740400   27131 main.go:141] libmachine: (ha-844661) Calling .PreCreateCheck
	I1105 18:03:20.740713   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:20.741068   27131 main.go:141] libmachine: Creating machine...
	I1105 18:03:20.741080   27131 main.go:141] libmachine: (ha-844661) Calling .Create
	I1105 18:03:20.741210   27131 main.go:141] libmachine: (ha-844661) Creating KVM machine...
	I1105 18:03:20.742313   27131 main.go:141] libmachine: (ha-844661) DBG | found existing default KVM network
	I1105 18:03:20.742933   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:20.742806   27154 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1105 18:03:20.742963   27131 main.go:141] libmachine: (ha-844661) DBG | created network xml: 
	I1105 18:03:20.742994   27131 main.go:141] libmachine: (ha-844661) DBG | <network>
	I1105 18:03:20.743008   27131 main.go:141] libmachine: (ha-844661) DBG |   <name>mk-ha-844661</name>
	I1105 18:03:20.743015   27131 main.go:141] libmachine: (ha-844661) DBG |   <dns enable='no'/>
	I1105 18:03:20.743024   27131 main.go:141] libmachine: (ha-844661) DBG |   
	I1105 18:03:20.743029   27131 main.go:141] libmachine: (ha-844661) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1105 18:03:20.743036   27131 main.go:141] libmachine: (ha-844661) DBG |     <dhcp>
	I1105 18:03:20.743041   27131 main.go:141] libmachine: (ha-844661) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1105 18:03:20.743049   27131 main.go:141] libmachine: (ha-844661) DBG |     </dhcp>
	I1105 18:03:20.743053   27131 main.go:141] libmachine: (ha-844661) DBG |   </ip>
	I1105 18:03:20.743060   27131 main.go:141] libmachine: (ha-844661) DBG |   
	I1105 18:03:20.743066   27131 main.go:141] libmachine: (ha-844661) DBG | </network>
	I1105 18:03:20.743074   27131 main.go:141] libmachine: (ha-844661) DBG | 
	I1105 18:03:20.748364   27131 main.go:141] libmachine: (ha-844661) DBG | trying to create private KVM network mk-ha-844661 192.168.39.0/24...
	I1105 18:03:20.811114   27131 main.go:141] libmachine: (ha-844661) DBG | private KVM network mk-ha-844661 192.168.39.0/24 created
	I1105 18:03:20.811141   27131 main.go:141] libmachine: (ha-844661) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 ...
	I1105 18:03:20.811159   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:20.811087   27154 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:20.811177   27131 main.go:141] libmachine: (ha-844661) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:03:20.811237   27131 main.go:141] libmachine: (ha-844661) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:03:21.057798   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.057650   27154 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa...
	I1105 18:03:21.226724   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.226590   27154 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/ha-844661.rawdisk...
	I1105 18:03:21.226750   27131 main.go:141] libmachine: (ha-844661) DBG | Writing magic tar header
	I1105 18:03:21.226760   27131 main.go:141] libmachine: (ha-844661) DBG | Writing SSH key tar header
	I1105 18:03:21.226768   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:21.226707   27154 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 ...
	I1105 18:03:21.226781   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661
	I1105 18:03:21.226859   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661 (perms=drwx------)
	I1105 18:03:21.226880   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:03:21.226887   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:03:21.226897   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:03:21.226904   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:03:21.226909   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:03:21.226916   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:03:21.226920   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:03:21.226927   27131 main.go:141] libmachine: (ha-844661) DBG | Checking permissions on dir: /home
	I1105 18:03:21.226932   27131 main.go:141] libmachine: (ha-844661) DBG | Skipping /home - not owner
	I1105 18:03:21.226941   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:03:21.226950   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:03:21.226957   27131 main.go:141] libmachine: (ha-844661) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:03:21.226962   27131 main.go:141] libmachine: (ha-844661) Creating domain...
	I1105 18:03:21.228177   27131 main.go:141] libmachine: (ha-844661) define libvirt domain using xml: 
	I1105 18:03:21.228198   27131 main.go:141] libmachine: (ha-844661) <domain type='kvm'>
	I1105 18:03:21.228204   27131 main.go:141] libmachine: (ha-844661)   <name>ha-844661</name>
	I1105 18:03:21.228209   27131 main.go:141] libmachine: (ha-844661)   <memory unit='MiB'>2200</memory>
	I1105 18:03:21.228214   27131 main.go:141] libmachine: (ha-844661)   <vcpu>2</vcpu>
	I1105 18:03:21.228218   27131 main.go:141] libmachine: (ha-844661)   <features>
	I1105 18:03:21.228223   27131 main.go:141] libmachine: (ha-844661)     <acpi/>
	I1105 18:03:21.228228   27131 main.go:141] libmachine: (ha-844661)     <apic/>
	I1105 18:03:21.228233   27131 main.go:141] libmachine: (ha-844661)     <pae/>
	I1105 18:03:21.228241   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228249   27131 main.go:141] libmachine: (ha-844661)   </features>
	I1105 18:03:21.228254   27131 main.go:141] libmachine: (ha-844661)   <cpu mode='host-passthrough'>
	I1105 18:03:21.228261   27131 main.go:141] libmachine: (ha-844661)   
	I1105 18:03:21.228268   27131 main.go:141] libmachine: (ha-844661)   </cpu>
	I1105 18:03:21.228298   27131 main.go:141] libmachine: (ha-844661)   <os>
	I1105 18:03:21.228318   27131 main.go:141] libmachine: (ha-844661)     <type>hvm</type>
	I1105 18:03:21.228325   27131 main.go:141] libmachine: (ha-844661)     <boot dev='cdrom'/>
	I1105 18:03:21.228329   27131 main.go:141] libmachine: (ha-844661)     <boot dev='hd'/>
	I1105 18:03:21.228355   27131 main.go:141] libmachine: (ha-844661)     <bootmenu enable='no'/>
	I1105 18:03:21.228375   27131 main.go:141] libmachine: (ha-844661)   </os>
	I1105 18:03:21.228385   27131 main.go:141] libmachine: (ha-844661)   <devices>
	I1105 18:03:21.228403   27131 main.go:141] libmachine: (ha-844661)     <disk type='file' device='cdrom'>
	I1105 18:03:21.228418   27131 main.go:141] libmachine: (ha-844661)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/boot2docker.iso'/>
	I1105 18:03:21.228429   27131 main.go:141] libmachine: (ha-844661)       <target dev='hdc' bus='scsi'/>
	I1105 18:03:21.228437   27131 main.go:141] libmachine: (ha-844661)       <readonly/>
	I1105 18:03:21.228450   27131 main.go:141] libmachine: (ha-844661)     </disk>
	I1105 18:03:21.228462   27131 main.go:141] libmachine: (ha-844661)     <disk type='file' device='disk'>
	I1105 18:03:21.228474   27131 main.go:141] libmachine: (ha-844661)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:03:21.228488   27131 main.go:141] libmachine: (ha-844661)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/ha-844661.rawdisk'/>
	I1105 18:03:21.228497   27131 main.go:141] libmachine: (ha-844661)       <target dev='hda' bus='virtio'/>
	I1105 18:03:21.228502   27131 main.go:141] libmachine: (ha-844661)     </disk>
	I1105 18:03:21.228511   27131 main.go:141] libmachine: (ha-844661)     <interface type='network'>
	I1105 18:03:21.228519   27131 main.go:141] libmachine: (ha-844661)       <source network='mk-ha-844661'/>
	I1105 18:03:21.228532   27131 main.go:141] libmachine: (ha-844661)       <model type='virtio'/>
	I1105 18:03:21.228539   27131 main.go:141] libmachine: (ha-844661)     </interface>
	I1105 18:03:21.228551   27131 main.go:141] libmachine: (ha-844661)     <interface type='network'>
	I1105 18:03:21.228560   27131 main.go:141] libmachine: (ha-844661)       <source network='default'/>
	I1105 18:03:21.228570   27131 main.go:141] libmachine: (ha-844661)       <model type='virtio'/>
	I1105 18:03:21.228579   27131 main.go:141] libmachine: (ha-844661)     </interface>
	I1105 18:03:21.228587   27131 main.go:141] libmachine: (ha-844661)     <serial type='pty'>
	I1105 18:03:21.228599   27131 main.go:141] libmachine: (ha-844661)       <target port='0'/>
	I1105 18:03:21.228607   27131 main.go:141] libmachine: (ha-844661)     </serial>
	I1105 18:03:21.228613   27131 main.go:141] libmachine: (ha-844661)     <console type='pty'>
	I1105 18:03:21.228629   27131 main.go:141] libmachine: (ha-844661)       <target type='serial' port='0'/>
	I1105 18:03:21.228642   27131 main.go:141] libmachine: (ha-844661)     </console>
	I1105 18:03:21.228653   27131 main.go:141] libmachine: (ha-844661)     <rng model='virtio'>
	I1105 18:03:21.228670   27131 main.go:141] libmachine: (ha-844661)       <backend model='random'>/dev/random</backend>
	I1105 18:03:21.228679   27131 main.go:141] libmachine: (ha-844661)     </rng>
	I1105 18:03:21.228687   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228694   27131 main.go:141] libmachine: (ha-844661)     
	I1105 18:03:21.228699   27131 main.go:141] libmachine: (ha-844661)   </devices>
	I1105 18:03:21.228707   27131 main.go:141] libmachine: (ha-844661) </domain>
	I1105 18:03:21.228717   27131 main.go:141] libmachine: (ha-844661) 
	I1105 18:03:21.232718   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:b2:92:26 in network default
	I1105 18:03:21.233193   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:21.233215   27131 main.go:141] libmachine: (ha-844661) Ensuring networks are active...
	I1105 18:03:21.233765   27131 main.go:141] libmachine: (ha-844661) Ensuring network default is active
	I1105 18:03:21.234017   27131 main.go:141] libmachine: (ha-844661) Ensuring network mk-ha-844661 is active
	I1105 18:03:21.234455   27131 main.go:141] libmachine: (ha-844661) Getting domain xml...
	I1105 18:03:21.235089   27131 main.go:141] libmachine: (ha-844661) Creating domain...
	I1105 18:03:22.412574   27131 main.go:141] libmachine: (ha-844661) Waiting to get IP...
	I1105 18:03:22.413266   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:22.413608   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:22.413630   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:22.413577   27154 retry.go:31] will retry after 279.954438ms: waiting for machine to come up
	I1105 18:03:22.695059   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:22.695483   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:22.695511   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:22.695451   27154 retry.go:31] will retry after 304.898477ms: waiting for machine to come up
	I1105 18:03:23.001972   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.002322   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.002343   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.002303   27154 retry.go:31] will retry after 443.493793ms: waiting for machine to come up
	I1105 18:03:23.446683   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.447042   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.447069   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.446999   27154 retry.go:31] will retry after 509.391538ms: waiting for machine to come up
	I1105 18:03:23.957539   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:23.957900   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:23.957927   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:23.957847   27154 retry.go:31] will retry after 602.880889ms: waiting for machine to come up
	I1105 18:03:24.562659   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:24.563119   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:24.563144   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:24.563076   27154 retry.go:31] will retry after 741.734368ms: waiting for machine to come up
	I1105 18:03:25.306116   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:25.306633   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:25.306663   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:25.306587   27154 retry.go:31] will retry after 1.015957471s: waiting for machine to come up
	I1105 18:03:26.324342   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:26.324731   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:26.324755   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:26.324683   27154 retry.go:31] will retry after 1.378698886s: waiting for machine to come up
	I1105 18:03:27.705172   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:27.705551   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:27.705575   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:27.705506   27154 retry.go:31] will retry after 1.576136067s: waiting for machine to come up
	I1105 18:03:29.283960   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:29.284380   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:29.284417   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:29.284337   27154 retry.go:31] will retry after 2.253581174s: waiting for machine to come up
	I1105 18:03:31.539436   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:31.539830   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:31.539860   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:31.539773   27154 retry.go:31] will retry after 1.761371484s: waiting for machine to come up
	I1105 18:03:33.303719   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:33.304166   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:33.304190   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:33.304128   27154 retry.go:31] will retry after 2.85080226s: waiting for machine to come up
	I1105 18:03:36.156486   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:36.156898   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find current IP address of domain ha-844661 in network mk-ha-844661
	I1105 18:03:36.156920   27131 main.go:141] libmachine: (ha-844661) DBG | I1105 18:03:36.156851   27154 retry.go:31] will retry after 4.320693691s: waiting for machine to come up
	I1105 18:03:40.482276   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.482645   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has current primary IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.482666   27131 main.go:141] libmachine: (ha-844661) Found IP for machine: 192.168.39.48
	I1105 18:03:40.482731   27131 main.go:141] libmachine: (ha-844661) Reserving static IP address...
	I1105 18:03:40.483186   27131 main.go:141] libmachine: (ha-844661) DBG | unable to find host DHCP lease matching {name: "ha-844661", mac: "52:54:00:ba:57:dd", ip: "192.168.39.48"} in network mk-ha-844661
	I1105 18:03:40.553039   27131 main.go:141] libmachine: (ha-844661) DBG | Getting to WaitForSSH function...
	I1105 18:03:40.553065   27131 main.go:141] libmachine: (ha-844661) Reserved static IP address: 192.168.39.48
	I1105 18:03:40.553074   27131 main.go:141] libmachine: (ha-844661) Waiting for SSH to be available...
	I1105 18:03:40.555541   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.555889   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.555921   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.556076   27131 main.go:141] libmachine: (ha-844661) DBG | Using SSH client type: external
	I1105 18:03:40.556099   27131 main.go:141] libmachine: (ha-844661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa (-rw-------)
	I1105 18:03:40.556130   27131 main.go:141] libmachine: (ha-844661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:03:40.556164   27131 main.go:141] libmachine: (ha-844661) DBG | About to run SSH command:
	I1105 18:03:40.556196   27131 main.go:141] libmachine: (ha-844661) DBG | exit 0
	I1105 18:03:40.678881   27131 main.go:141] libmachine: (ha-844661) DBG | SSH cmd err, output: <nil>: 
	I1105 18:03:40.679168   27131 main.go:141] libmachine: (ha-844661) KVM machine creation complete!
	I1105 18:03:40.679431   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:40.680021   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:40.680197   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:40.680362   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:03:40.680377   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:03:40.681549   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:03:40.681565   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:03:40.681581   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:03:40.681589   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.683878   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.684197   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.684222   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.684354   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.684522   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.684666   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.684789   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.684936   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.685164   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.685176   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:03:40.782106   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:03:40.782126   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:03:40.782134   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.785142   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.785540   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.785569   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.785664   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.785868   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.786031   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.786159   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.786354   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.786515   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.786526   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:03:40.883619   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:03:40.883676   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:03:40.883682   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:03:40.883690   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:40.883923   27131 buildroot.go:166] provisioning hostname "ha-844661"
	I1105 18:03:40.883949   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:40.884120   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:40.886507   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.886833   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:40.886857   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:40.886980   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:40.887151   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.887291   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:40.887396   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:40.887549   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:40.887741   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:40.887756   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661 && echo "ha-844661" | sudo tee /etc/hostname
	I1105 18:03:41.000392   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661
	
	I1105 18:03:41.000420   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.003294   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.003567   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.003608   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.003744   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.003933   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.004103   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.004242   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.004353   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.004531   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.004545   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:03:41.111348   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:03:41.111383   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:03:41.111432   27131 buildroot.go:174] setting up certificates
	I1105 18:03:41.111449   27131 provision.go:84] configureAuth start
	I1105 18:03:41.111460   27131 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:03:41.111736   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.114450   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.114812   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.114841   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.114944   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.117124   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.117436   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.117462   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.117573   27131 provision.go:143] copyHostCerts
	I1105 18:03:41.117613   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:03:41.117655   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:03:41.117671   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:03:41.117771   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:03:41.117875   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:03:41.117903   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:03:41.117913   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:03:41.117953   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:03:41.118004   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:03:41.118021   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:03:41.118027   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:03:41.118050   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
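	The copyHostCerts lines above follow a remove-then-copy pattern for ca.pem, cert.pem and key.pem. A minimal standalone sketch of that pattern, with hypothetical paths and a hypothetical helper (not minikube's exec_runner API):

	package main

	import (
		"io"
		"log"
		"os"
	)

	// copyFresh removes any stale copy at dst and rewrites it from src with the
	// given mode, mirroring the found / removing / cp sequence in the log above.
	func copyFresh(src, dst string, mode os.FileMode) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, mode)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		// Hypothetical example paths; the log copies .minikube/certs/ca.pem -> .minikube/ca.pem, etc.
		if err := copyFresh("certs/ca.pem", "ca.pem", 0o644); err != nil {
			log.Fatal(err)
		}
	}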
	I1105 18:03:41.118095   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661 san=[127.0.0.1 192.168.39.48 ha-844661 localhost minikube]
	I1105 18:03:41.208702   27131 provision.go:177] copyRemoteCerts
	I1105 18:03:41.208760   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:03:41.208783   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.211467   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.211827   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.211850   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.212052   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.212204   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.212341   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.212443   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.296812   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:03:41.296897   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:03:41.319712   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:03:41.319772   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:03:41.342415   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:03:41.342483   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1105 18:03:41.365050   27131 provision.go:87] duration metric: took 253.585291ms to configureAuth
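	configureAuth above generates a CA-signed server certificate whose SANs are the logged list (IPs 127.0.0.1 and 192.168.39.48, names ha-844661, localhost, minikube) and copies it to /etc/docker on the guest. A minimal sketch of producing such a certificate with Go's standard crypto/x509, purely illustrative and not minikube's crypto.go (the in-memory CA and the 26280h lifetime taken from the profile's CertExpiration are assumptions):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Hypothetical in-memory CA; minikube instead loads certs/ca.pem and ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-844661"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.48")},
			DNSNames:     []string{"ha-844661", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}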
	I1105 18:03:41.365082   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:03:41.365296   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:03:41.365378   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.368515   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.368840   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.368869   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.369025   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.369189   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.369363   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.369489   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.369646   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.369808   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.369821   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:03:41.576635   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:03:41.576666   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:03:41.576676   27131 main.go:141] libmachine: (ha-844661) Calling .GetURL
	I1105 18:03:41.577929   27131 main.go:141] libmachine: (ha-844661) DBG | Using libvirt version 6000000
	I1105 18:03:41.580297   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.580615   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.580654   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.580760   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:03:41.580772   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:03:41.580778   27131 client.go:171] duration metric: took 20.840565211s to LocalClient.Create
	I1105 18:03:41.580795   27131 start.go:167] duration metric: took 20.84062429s to libmachine.API.Create "ha-844661"
	I1105 18:03:41.580805   27131 start.go:293] postStartSetup for "ha-844661" (driver="kvm2")
	I1105 18:03:41.580814   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:03:41.580829   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.581046   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:03:41.581068   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.583124   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.583501   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.583522   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.583601   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.583779   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.583943   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.584110   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.661161   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:03:41.665033   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:03:41.665062   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:03:41.665127   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:03:41.665231   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:03:41.665252   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:03:41.665373   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:03:41.674466   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:03:41.696494   27131 start.go:296] duration metric: took 115.67878ms for postStartSetup
	I1105 18:03:41.696542   27131 main.go:141] libmachine: (ha-844661) Calling .GetConfigRaw
	I1105 18:03:41.697138   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.699655   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.699984   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.700009   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.700292   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:03:41.700505   27131 start.go:128] duration metric: took 20.977803727s to createHost
	I1105 18:03:41.700531   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.702386   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.702601   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.702627   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.702711   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.702863   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.703005   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.703106   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.703251   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:03:41.703451   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:03:41.703464   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:03:41.803411   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829821.777547713
	
	I1105 18:03:41.803432   27131 fix.go:216] guest clock: 1730829821.777547713
	I1105 18:03:41.803441   27131 fix.go:229] Guest: 2024-11-05 18:03:41.777547713 +0000 UTC Remote: 2024-11-05 18:03:41.700519186 +0000 UTC m=+21.085212205 (delta=77.028527ms)
	I1105 18:03:41.803466   27131 fix.go:200] guest clock delta is within tolerance: 77.028527ms
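	The guest-clock check above runs `date +%s.%N` over SSH and compares the result with the host-side timestamp taken around the call: 18:03:41.777547713 minus 18:03:41.700519186 gives the logged 77.028527ms delta. A minimal sketch of that comparison; the actual tolerance minikube applies is not shown in this excerpt, so the 2s below is an assumed placeholder:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Date(2024, 11, 5, 18, 3, 41, 777547713, time.UTC)
		remote := time.Date(2024, 11, 5, 18, 3, 41, 700519186, time.UTC)
		delta := guest.Sub(remote) // 77.028527ms, matching the log

		const tolerance = 2 * time.Second // assumed, not taken from the log
		ok := delta < tolerance && delta > -tolerance
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, ok)
	}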
	I1105 18:03:41.803472   27131 start.go:83] releasing machines lock for "ha-844661", held for 21.080851922s
	I1105 18:03:41.803504   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.803818   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:41.806212   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.806544   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.806574   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.806731   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807182   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807323   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:03:41.807421   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:03:41.807458   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.807478   27131 ssh_runner.go:195] Run: cat /version.json
	I1105 18:03:41.807503   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:03:41.809937   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810070   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810265   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.810291   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810383   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.810476   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:41.810506   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:41.810517   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.810650   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.810655   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:03:41.810815   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:03:41.810809   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.810922   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:03:41.811058   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:03:41.883551   27131 ssh_runner.go:195] Run: systemctl --version
	I1105 18:03:41.923044   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:03:42.072766   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:03:42.079007   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:03:42.079076   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:03:42.094820   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:03:42.094844   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:03:42.094917   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:03:42.118583   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:03:42.138115   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:03:42.138172   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:03:42.152440   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:03:42.166344   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:03:42.279937   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:03:42.434792   27131 docker.go:233] disabling docker service ...
	I1105 18:03:42.434953   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:03:42.449109   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:03:42.461551   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:03:42.578145   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:03:42.699091   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
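	The sequence above stops and masks cri-docker and docker, then uses the exit status of `systemctl is-active --quiet` to confirm the unit is no longer running before switching the node over to CRI-O. A standalone sketch of that disable-then-verify pattern (plain os/exec, not minikube's ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("sudo", args...).Run()
	}

	func main() {
		for _, args := range [][]string{
			{"systemctl", "stop", "-f", "docker.socket"},
			{"systemctl", "stop", "-f", "docker.service"},
			{"systemctl", "disable", "docker.socket"},
			{"systemctl", "mask", "docker.service"},
		} {
			_ = run(args...) // best effort, mirroring the log's sequence
		}
		// is-active exits non-zero when the unit is inactive, which is the desired state here.
		if err := run("systemctl", "is-active", "--quiet", "docker"); err != nil {
			fmt.Println("docker is not active (expected after masking)")
		}
	}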
	I1105 18:03:42.712758   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:03:42.730751   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:03:42.730837   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.741264   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:03:42.741334   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.751371   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.761461   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.771733   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:03:42.782235   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.792151   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.809625   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:03:42.820631   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:03:42.829567   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:03:42.829657   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:03:42.841074   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:03:42.849804   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:03:42.970294   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:03:43.072129   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:03:43.072202   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:03:43.076505   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:03:43.076553   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:03:43.079876   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:03:43.118292   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
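	After `systemctl restart crio`, the log notes "Will wait 60s for socket path /var/run/crio/crio.sock" before probing crictl. A minimal sketch of that wait loop, assuming a simple stat-based poll (illustrative only, not minikube's start.go):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file exists, runtime is up
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is ready")
	}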
	I1105 18:03:43.118368   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:03:43.145365   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:03:43.174475   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:03:43.175688   27131 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:03:43.178118   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:43.178392   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:03:43.178429   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:03:43.178616   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:03:43.182299   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:03:43.194156   27131 kubeadm.go:883] updating cluster {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:03:43.194286   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:03:43.194326   27131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:03:43.224139   27131 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 18:03:43.224200   27131 ssh_runner.go:195] Run: which lz4
	I1105 18:03:43.227717   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1105 18:03:43.227803   27131 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:03:43.231367   27131 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:03:43.231394   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 18:03:44.421241   27131 crio.go:462] duration metric: took 1.193460189s to copy over tarball
	I1105 18:03:44.421309   27131 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:03:46.448289   27131 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.026951778s)
	I1105 18:03:46.448321   27131 crio.go:469] duration metric: took 2.027054899s to extract the tarball
	I1105 18:03:46.448331   27131 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 18:03:46.484203   27131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:03:46.526703   27131 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:03:46.526728   27131 cache_images.go:84] Images are preloaded, skipping loading
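	The preload step above checks whether /preloaded.tar.lz4 already exists on the guest, copies the ~392 MB image tarball over when it does not, unpacks it into /var with xattrs preserved, and removes it; afterwards `crictl images` confirms the images are preloaded. A hedged sketch of that flow, where a local copy stands in for minikube's scp over SSH:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		src := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4")

		if _, err := os.Stat(tarball); err != nil {
			// Not present yet: stand-in for the scp in the log.
			if out, err := exec.Command("sudo", "cp", src, tarball).CombinedOutput(); err != nil {
				log.Fatalf("copy failed: %v\n%s", err, out)
			}
		}
		// Same extraction command the log runs.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		_ = os.Remove(tarball) // best effort; the log runs rm on /preloaded.tar.lz4
	}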
	I1105 18:03:46.526737   27131 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.2 crio true true} ...
	I1105 18:03:46.526839   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:03:46.526923   27131 ssh_runner.go:195] Run: crio config
	I1105 18:03:46.568508   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:46.568526   27131 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 18:03:46.568535   27131 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:03:46.568555   27131 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844661 NodeName:ha-844661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:03:46.568670   27131 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 18:03:46.568726   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:03:46.568770   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:03:46.584044   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:03:46.584179   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1105 18:03:46.584237   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:03:46.593564   27131 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:03:46.593616   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 18:03:46.602413   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1105 18:03:46.618161   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:03:46.634586   27131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1105 18:03:46.650181   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1105 18:03:46.665377   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:03:46.668925   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:03:46.679986   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:03:46.788039   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
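	Both /etc/hosts updates above (host.minikube.internal earlier and control-plane.minikube.internal here) use the same idempotent pipeline: filter out any existing line for the name, append the fresh "IP<tab>name" entry, and copy the result back. A native Go stand-in for that grep -v / echo / cp pipeline (writing /etc/hosts requires root):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale entry for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}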
	I1105 18:03:46.803466   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.48
	I1105 18:03:46.803487   27131 certs.go:194] generating shared ca certs ...
	I1105 18:03:46.803503   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.803661   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:03:46.803717   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:03:46.803731   27131 certs.go:256] generating profile certs ...
	I1105 18:03:46.803788   27131 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:03:46.803806   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt with IP's: []
	I1105 18:03:46.868048   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt ...
	I1105 18:03:46.868073   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt: {Name:mk1b1384fd11cca80823d77e811ce40ed13a39a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.868260   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key ...
	I1105 18:03:46.868273   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key: {Name:mk63b8cd2995063e8f249e25659d0d581c1c609d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:46.868372   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a
	I1105 18:03:46.868394   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.254]
	I1105 18:03:47.168393   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a ...
	I1105 18:03:47.168422   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a: {Name:mkfb181b3090bd8c3e2b4c01d3e8bebb9949241a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.168598   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a ...
	I1105 18:03:47.168612   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a: {Name:mk8ee51e070e9f8f3516c15edb86d588cc060b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.168716   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.30379b6a -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:03:47.168827   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.30379b6a -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:03:47.168910   27131 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:03:47.168929   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt with IP's: []
	I1105 18:03:47.272330   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt ...
	I1105 18:03:47.272363   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt: {Name:mkef37902a8eaa82f4513587418829011c41aa9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.272551   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key ...
	I1105 18:03:47.272567   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key: {Name:mka47632f74c8924a4575ad6d317d9db035f5aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:03:47.272701   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:03:47.272727   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:03:47.272746   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:03:47.272764   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:03:47.272788   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:03:47.272803   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:03:47.272820   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:03:47.272860   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:03:47.272935   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:03:47.272983   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:03:47.272995   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:03:47.273029   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:03:47.273061   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:03:47.273095   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:03:47.273147   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:03:47.273189   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.273209   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.273227   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.273815   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:03:47.298487   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:03:47.321311   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:03:47.343337   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:03:47.365041   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 18:03:47.387466   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:03:47.409231   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:03:47.430651   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:03:47.452212   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:03:47.474137   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:03:47.495806   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:03:47.517223   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:03:47.532167   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:03:47.537576   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:03:47.549952   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.556864   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.556922   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:03:47.564072   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:03:47.575807   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:03:47.588714   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.593382   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.593445   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:03:47.601274   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:03:47.613497   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:03:47.623268   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.627461   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.627512   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:03:47.632828   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
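	The trust-store wiring above places each CA file under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 names come from. A minimal sketch of one such step, shelling out to openssl for the hash exactly as the log does (requires openssl and root):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", cert)
	}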
	I1105 18:03:47.642821   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:03:47.646365   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:03:47.646411   27131 kubeadm.go:392] StartCluster: {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:03:47.646477   27131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:03:47.646544   27131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:03:47.682117   27131 cri.go:89] found id: ""
	I1105 18:03:47.682186   27131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:03:47.691260   27131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:03:47.700258   27131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:03:47.708885   27131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:03:47.708907   27131 kubeadm.go:157] found existing configuration files:
	
	I1105 18:03:47.708950   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:03:47.717439   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:03:47.717497   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:03:47.726246   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:03:47.734558   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:03:47.734611   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:03:47.743183   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:03:47.751387   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:03:47.751433   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:03:47.760203   27131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:03:47.768178   27131 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:03:47.768234   27131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
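The cleanup above boils down to one check per kubeconfig: keep the file only if it already references the expected control-plane endpoint. A minimal sketch of that loop, with the endpoint taken from the grep commands above:

ENDPOINT="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # a file that is absent or points at a different endpoint is treated as stale
  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
done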
	I1105 18:03:47.776770   27131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:03:47.967353   27131 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 18:03:59.183523   27131 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 18:03:59.183604   27131 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:03:59.183699   27131 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:03:59.183848   27131 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:03:59.183952   27131 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 18:03:59.184008   27131 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:03:59.185602   27131 out.go:235]   - Generating certificates and keys ...
	I1105 18:03:59.185696   27131 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:03:59.185773   27131 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:03:59.185856   27131 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 18:03:59.185912   27131 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 18:03:59.185997   27131 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 18:03:59.186086   27131 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 18:03:59.186173   27131 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 18:03:59.186341   27131 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-844661 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1105 18:03:59.186418   27131 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 18:03:59.186574   27131 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-844661 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1105 18:03:59.186680   27131 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 18:03:59.186753   27131 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 18:03:59.186826   27131 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 18:03:59.186915   27131 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:03:59.187003   27131 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:03:59.187068   27131 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 18:03:59.187122   27131 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:03:59.187247   27131 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:03:59.187350   27131 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:03:59.187464   27131 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:03:59.187595   27131 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:03:59.189162   27131 out.go:235]   - Booting up control plane ...
	I1105 18:03:59.189263   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:03:59.189330   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:03:59.189411   27131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:03:59.189560   27131 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:03:59.189674   27131 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:03:59.189732   27131 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:03:59.189870   27131 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 18:03:59.190000   27131 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 18:03:59.190063   27131 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.0020676s
	I1105 18:03:59.190152   27131 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 18:03:59.190232   27131 kubeadm.go:310] [api-check] The API server is healthy after 5.797330373s
	I1105 18:03:59.190371   27131 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 18:03:59.190545   27131 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 18:03:59.190621   27131 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 18:03:59.190819   27131 kubeadm.go:310] [mark-control-plane] Marking the node ha-844661 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 18:03:59.190908   27131 kubeadm.go:310] [bootstrap-token] Using token: 87pfeh.t954ki35wy37ojkf
	I1105 18:03:59.192164   27131 out.go:235]   - Configuring RBAC rules ...
	I1105 18:03:59.192251   27131 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 18:03:59.192336   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 18:03:59.192519   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 18:03:59.192749   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 18:03:59.192914   27131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 18:03:59.193036   27131 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 18:03:59.193159   27131 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 18:03:59.193205   27131 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 18:03:59.193263   27131 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 18:03:59.193287   27131 kubeadm.go:310] 
	I1105 18:03:59.193351   27131 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 18:03:59.193361   27131 kubeadm.go:310] 
	I1105 18:03:59.193483   27131 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 18:03:59.193498   27131 kubeadm.go:310] 
	I1105 18:03:59.193525   27131 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 18:03:59.193576   27131 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 18:03:59.193636   27131 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 18:03:59.193642   27131 kubeadm.go:310] 
	I1105 18:03:59.193690   27131 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 18:03:59.193695   27131 kubeadm.go:310] 
	I1105 18:03:59.193734   27131 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 18:03:59.193739   27131 kubeadm.go:310] 
	I1105 18:03:59.193790   27131 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 18:03:59.193854   27131 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 18:03:59.193915   27131 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 18:03:59.193921   27131 kubeadm.go:310] 
	I1105 18:03:59.193994   27131 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 18:03:59.194085   27131 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 18:03:59.194112   27131 kubeadm.go:310] 
	I1105 18:03:59.194272   27131 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 87pfeh.t954ki35wy37ojkf \
	I1105 18:03:59.194366   27131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 18:03:59.194391   27131 kubeadm.go:310] 	--control-plane 
	I1105 18:03:59.194397   27131 kubeadm.go:310] 
	I1105 18:03:59.194470   27131 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 18:03:59.194483   27131 kubeadm.go:310] 
	I1105 18:03:59.194599   27131 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 87pfeh.t954ki35wy37ojkf \
	I1105 18:03:59.194713   27131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
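The --discovery-token-ca-cert-hash printed in the join commands is the SHA-256 of the cluster CA's public key. A hedged sketch of how it can be recomputed on the node, assuming the CA lives in the certificateDir shown earlier (/var/lib/minikube/certs):

openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
# the output should match the sha256:... value in the kubeadm join lines above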
	I1105 18:03:59.194723   27131 cni.go:84] Creating CNI manager for ""
	I1105 18:03:59.194729   27131 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 18:03:59.196416   27131 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 18:03:59.198072   27131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 18:03:59.203679   27131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 18:03:59.203699   27131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 18:03:59.220864   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
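With a single node found, minikube recommends kindnet and applies its manifest from /var/tmp/minikube/cni.yaml (the kubectl apply above). A quick way to confirm the CNI landed; the DaemonSet name is an assumption based on the stock kindnet manifest, not something this log confirms:

ls /opt/cni/bin/portmap                                   # the plugin stat-checked above
sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system get daemonset kindnet -o wide            # kindnet runs as a kube-system DaemonSet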
	I1105 18:03:59.577751   27131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 18:03:59.577851   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:03:59.577925   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661 minikube.k8s.io/updated_at=2024_11_05T18_03_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=true
	I1105 18:03:59.773949   27131 ops.go:34] apiserver oom_adj: -16
	I1105 18:03:59.774061   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:00.274452   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:00.774925   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:01.274873   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:01.774746   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:02.274653   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:04:02.410257   27131 kubeadm.go:1113] duration metric: took 2.832479659s to wait for elevateKubeSystemPrivileges
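The repeated "get sa default" calls above are a readiness poll: minikube keeps asking for the default ServiceAccount roughly every 500ms (per the timestamps) until the API server can serve it, and only then reports elevateKubeSystemPrivileges as done. A minimal sketch of the same wait, with the interval taken from the log:

KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
until sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
  sleep 0.5   # the log above shows roughly 500ms between attempts
done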
	I1105 18:04:02.410297   27131 kubeadm.go:394] duration metric: took 14.763886485s to StartCluster
	I1105 18:04:02.410318   27131 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:02.410399   27131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:02.411281   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:02.411532   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 18:04:02.411550   27131 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:02.411572   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:04:02.411587   27131 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 18:04:02.411670   27131 addons.go:69] Setting storage-provisioner=true in profile "ha-844661"
	I1105 18:04:02.411690   27131 addons.go:234] Setting addon storage-provisioner=true in "ha-844661"
	I1105 18:04:02.411709   27131 addons.go:69] Setting default-storageclass=true in profile "ha-844661"
	I1105 18:04:02.411717   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:02.411726   27131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-844661"
	I1105 18:04:02.411747   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:02.412164   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.412164   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.412207   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.412212   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.427238   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I1105 18:04:02.427311   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I1105 18:04:02.427732   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.427772   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.428176   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.428198   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.428276   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.428292   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.428565   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.428588   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.428730   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.429124   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.429169   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.430653   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:02.430886   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 18:04:02.431352   27131 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 18:04:02.431554   27131 addons.go:234] Setting addon default-storageclass=true in "ha-844661"
	I1105 18:04:02.431592   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:02.431879   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.431911   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.444788   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1105 18:04:02.445225   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.445776   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.445800   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.446109   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.446308   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.446715   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I1105 18:04:02.447172   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.447626   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.447652   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.447978   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.447989   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:02.448526   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:02.448566   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:02.450053   27131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:04:02.451430   27131 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:04:02.451447   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 18:04:02.451465   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:02.453936   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.454325   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:02.454352   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.454596   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:02.454747   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:02.454895   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:02.455039   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:02.463344   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1105 18:04:02.463824   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:02.464272   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:02.464295   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:02.464580   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:02.464736   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:02.466150   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:02.466325   27131 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 18:04:02.466346   27131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 18:04:02.466366   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:02.468861   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.469292   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:02.469320   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:02.469478   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:02.469641   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:02.469795   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:02.469919   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:02.559386   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 18:04:02.582601   27131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:04:02.634107   27131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 18:04:03.029603   27131 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
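The long sed pipeline above rewrites the coredns ConfigMap so the Corefile gains a hosts block mapping host.minikube.internal to the host gateway (192.168.39.1 here). One way to inspect the injected record afterwards; "Corefile" is the standard key in CoreDNS's ConfigMap data:

sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
# expected to show 192.168.39.1 host.minikube.internal followed by fallthrough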
	I1105 18:04:03.212900   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.212938   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.212957   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213012   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213238   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213254   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213263   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.213301   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213309   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213317   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213327   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.213335   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.213567   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.213576   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.213601   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213608   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213606   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.213626   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.213684   27131 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 18:04:03.213697   27131 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 18:04:03.213833   27131 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1105 18:04:03.213847   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:03.213858   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:03.213863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:03.230734   27131 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1105 18:04:03.231584   27131 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1105 18:04:03.231606   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:03.231617   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:03.231624   27131 round_trippers.go:473]     Content-Type: application/json
	I1105 18:04:03.231628   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:03.238223   27131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:04:03.238372   27131 main.go:141] libmachine: Making call to close driver server
	I1105 18:04:03.238386   27131 main.go:141] libmachine: (ha-844661) Calling .Close
	I1105 18:04:03.238717   27131 main.go:141] libmachine: (ha-844661) DBG | Closing plugin on server side
	I1105 18:04:03.238773   27131 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:04:03.238806   27131 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:04:03.241254   27131 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1105 18:04:03.242442   27131 addons.go:510] duration metric: took 830.859112ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1105 18:04:03.242476   27131 start.go:246] waiting for cluster config update ...
	I1105 18:04:03.242491   27131 start.go:255] writing updated cluster config ...
	I1105 18:04:03.244187   27131 out.go:201] 
	I1105 18:04:03.246027   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:03.246146   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:03.247790   27131 out.go:177] * Starting "ha-844661-m02" control-plane node in "ha-844661" cluster
	I1105 18:04:03.248926   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:04:03.248959   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:04:03.249079   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:04:03.249097   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:04:03.249198   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:03.249437   27131 start.go:360] acquireMachinesLock for ha-844661-m02: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:04:03.249497   27131 start.go:364] duration metric: took 35.772µs to acquireMachinesLock for "ha-844661-m02"
	I1105 18:04:03.249518   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:03.249605   27131 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1105 18:04:03.251175   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:04:03.251287   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:03.251335   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:03.267010   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I1105 18:04:03.267624   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:03.268242   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:03.268268   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:03.268591   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:03.268765   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:03.268983   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:03.269146   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:04:03.269172   27131 client.go:168] LocalClient.Create starting
	I1105 18:04:03.269203   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:04:03.269237   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:04:03.269249   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:04:03.269297   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:04:03.269315   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:04:03.269325   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:04:03.269338   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:04:03.269353   27131 main.go:141] libmachine: (ha-844661-m02) Calling .PreCreateCheck
	I1105 18:04:03.269514   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:03.269893   27131 main.go:141] libmachine: Creating machine...
	I1105 18:04:03.269906   27131 main.go:141] libmachine: (ha-844661-m02) Calling .Create
	I1105 18:04:03.270065   27131 main.go:141] libmachine: (ha-844661-m02) Creating KVM machine...
	I1105 18:04:03.271308   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found existing default KVM network
	I1105 18:04:03.271402   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found existing private KVM network mk-ha-844661
	I1105 18:04:03.271535   27131 main.go:141] libmachine: (ha-844661-m02) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 ...
	I1105 18:04:03.271561   27131 main.go:141] libmachine: (ha-844661-m02) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:04:03.271623   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.271523   27490 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:04:03.271709   27131 main.go:141] libmachine: (ha-844661-m02) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:04:03.505902   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.505765   27490 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa...
	I1105 18:04:03.597676   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.597557   27490 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/ha-844661-m02.rawdisk...
	I1105 18:04:03.597706   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Writing magic tar header
	I1105 18:04:03.597716   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Writing SSH key tar header
	I1105 18:04:03.597724   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:03.597692   27490 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 ...
	I1105 18:04:03.597812   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02
	I1105 18:04:03.597845   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:04:03.597903   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:04:03.597916   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02 (perms=drwx------)
	I1105 18:04:03.597939   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:04:03.597948   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:04:03.597957   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:04:03.597965   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:04:03.597973   27131 main.go:141] libmachine: (ha-844661-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:04:03.597977   27131 main.go:141] libmachine: (ha-844661-m02) Creating domain...
	I1105 18:04:03.598013   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:04:03.598038   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:04:03.598049   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:04:03.598061   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Checking permissions on dir: /home
	I1105 18:04:03.598072   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Skipping /home - not owner
	I1105 18:04:03.598898   27131 main.go:141] libmachine: (ha-844661-m02) define libvirt domain using xml: 
	I1105 18:04:03.598916   27131 main.go:141] libmachine: (ha-844661-m02) <domain type='kvm'>
	I1105 18:04:03.598925   27131 main.go:141] libmachine: (ha-844661-m02)   <name>ha-844661-m02</name>
	I1105 18:04:03.598932   27131 main.go:141] libmachine: (ha-844661-m02)   <memory unit='MiB'>2200</memory>
	I1105 18:04:03.598941   27131 main.go:141] libmachine: (ha-844661-m02)   <vcpu>2</vcpu>
	I1105 18:04:03.598947   27131 main.go:141] libmachine: (ha-844661-m02)   <features>
	I1105 18:04:03.598959   27131 main.go:141] libmachine: (ha-844661-m02)     <acpi/>
	I1105 18:04:03.598965   27131 main.go:141] libmachine: (ha-844661-m02)     <apic/>
	I1105 18:04:03.598984   27131 main.go:141] libmachine: (ha-844661-m02)     <pae/>
	I1105 18:04:03.598993   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599024   27131 main.go:141] libmachine: (ha-844661-m02)   </features>
	I1105 18:04:03.599044   27131 main.go:141] libmachine: (ha-844661-m02)   <cpu mode='host-passthrough'>
	I1105 18:04:03.599055   27131 main.go:141] libmachine: (ha-844661-m02)   
	I1105 18:04:03.599061   27131 main.go:141] libmachine: (ha-844661-m02)   </cpu>
	I1105 18:04:03.599069   27131 main.go:141] libmachine: (ha-844661-m02)   <os>
	I1105 18:04:03.599077   27131 main.go:141] libmachine: (ha-844661-m02)     <type>hvm</type>
	I1105 18:04:03.599086   27131 main.go:141] libmachine: (ha-844661-m02)     <boot dev='cdrom'/>
	I1105 18:04:03.599093   27131 main.go:141] libmachine: (ha-844661-m02)     <boot dev='hd'/>
	I1105 18:04:03.599109   27131 main.go:141] libmachine: (ha-844661-m02)     <bootmenu enable='no'/>
	I1105 18:04:03.599120   27131 main.go:141] libmachine: (ha-844661-m02)   </os>
	I1105 18:04:03.599128   27131 main.go:141] libmachine: (ha-844661-m02)   <devices>
	I1105 18:04:03.599142   27131 main.go:141] libmachine: (ha-844661-m02)     <disk type='file' device='cdrom'>
	I1105 18:04:03.599158   27131 main.go:141] libmachine: (ha-844661-m02)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/boot2docker.iso'/>
	I1105 18:04:03.599168   27131 main.go:141] libmachine: (ha-844661-m02)       <target dev='hdc' bus='scsi'/>
	I1105 18:04:03.599177   27131 main.go:141] libmachine: (ha-844661-m02)       <readonly/>
	I1105 18:04:03.599191   27131 main.go:141] libmachine: (ha-844661-m02)     </disk>
	I1105 18:04:03.599203   27131 main.go:141] libmachine: (ha-844661-m02)     <disk type='file' device='disk'>
	I1105 18:04:03.599219   27131 main.go:141] libmachine: (ha-844661-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:04:03.599234   27131 main.go:141] libmachine: (ha-844661-m02)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/ha-844661-m02.rawdisk'/>
	I1105 18:04:03.599245   27131 main.go:141] libmachine: (ha-844661-m02)       <target dev='hda' bus='virtio'/>
	I1105 18:04:03.599254   27131 main.go:141] libmachine: (ha-844661-m02)     </disk>
	I1105 18:04:03.599264   27131 main.go:141] libmachine: (ha-844661-m02)     <interface type='network'>
	I1105 18:04:03.599277   27131 main.go:141] libmachine: (ha-844661-m02)       <source network='mk-ha-844661'/>
	I1105 18:04:03.599295   27131 main.go:141] libmachine: (ha-844661-m02)       <model type='virtio'/>
	I1105 18:04:03.599306   27131 main.go:141] libmachine: (ha-844661-m02)     </interface>
	I1105 18:04:03.599316   27131 main.go:141] libmachine: (ha-844661-m02)     <interface type='network'>
	I1105 18:04:03.599328   27131 main.go:141] libmachine: (ha-844661-m02)       <source network='default'/>
	I1105 18:04:03.599336   27131 main.go:141] libmachine: (ha-844661-m02)       <model type='virtio'/>
	I1105 18:04:03.599346   27131 main.go:141] libmachine: (ha-844661-m02)     </interface>
	I1105 18:04:03.599360   27131 main.go:141] libmachine: (ha-844661-m02)     <serial type='pty'>
	I1105 18:04:03.599371   27131 main.go:141] libmachine: (ha-844661-m02)       <target port='0'/>
	I1105 18:04:03.599379   27131 main.go:141] libmachine: (ha-844661-m02)     </serial>
	I1105 18:04:03.599388   27131 main.go:141] libmachine: (ha-844661-m02)     <console type='pty'>
	I1105 18:04:03.599395   27131 main.go:141] libmachine: (ha-844661-m02)       <target type='serial' port='0'/>
	I1105 18:04:03.599405   27131 main.go:141] libmachine: (ha-844661-m02)     </console>
	I1105 18:04:03.599414   27131 main.go:141] libmachine: (ha-844661-m02)     <rng model='virtio'>
	I1105 18:04:03.599426   27131 main.go:141] libmachine: (ha-844661-m02)       <backend model='random'>/dev/random</backend>
	I1105 18:04:03.599433   27131 main.go:141] libmachine: (ha-844661-m02)     </rng>
	I1105 18:04:03.599441   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599450   27131 main.go:141] libmachine: (ha-844661-m02)     
	I1105 18:04:03.599458   27131 main.go:141] libmachine: (ha-844661-m02)   </devices>
	I1105 18:04:03.599468   27131 main.go:141] libmachine: (ha-844661-m02) </domain>
	I1105 18:04:03.599478   27131 main.go:141] libmachine: (ha-844661-m02) 
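The XML above is what libmachine hands to libvirt for the m02 node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a cdrom, the raw disk, and two virtio NICs (the private mk-ha-844661 network plus the default network). Doing the equivalent by hand would look roughly like this; the XML file name is illustrative:

virsh --connect qemu:///system define ha-844661-m02.xml    # register the domain from XML like the above
virsh --connect qemu:///system start ha-844661-m02         # boot it
virsh --connect qemu:///system domifaddr ha-844661-m02     # inspect DHCP leases once it is up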
	I1105 18:04:03.606202   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:bc:44:b3 in network default
	I1105 18:04:03.606844   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring networks are active...
	I1105 18:04:03.606873   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:03.607579   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring network default is active
	I1105 18:04:03.607877   27131 main.go:141] libmachine: (ha-844661-m02) Ensuring network mk-ha-844661 is active
	I1105 18:04:03.608339   27131 main.go:141] libmachine: (ha-844661-m02) Getting domain xml...
	I1105 18:04:03.609124   27131 main.go:141] libmachine: (ha-844661-m02) Creating domain...
	I1105 18:04:04.804854   27131 main.go:141] libmachine: (ha-844661-m02) Waiting to get IP...
	I1105 18:04:04.805676   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:04.806067   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:04.806128   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:04.806059   27490 retry.go:31] will retry after 221.645511ms: waiting for machine to come up
	I1105 18:04:05.029505   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.029976   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.030010   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.029926   27490 retry.go:31] will retry after 382.599739ms: waiting for machine to come up
	I1105 18:04:05.414471   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.414907   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.414933   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.414864   27490 retry.go:31] will retry after 327.048237ms: waiting for machine to come up
	I1105 18:04:05.743302   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:05.743771   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:05.743804   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:05.743710   27490 retry.go:31] will retry after 518.430277ms: waiting for machine to come up
	I1105 18:04:06.263310   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:06.263829   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:06.263853   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:06.263789   27490 retry.go:31] will retry after 629.481848ms: waiting for machine to come up
	I1105 18:04:06.894494   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:06.895089   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:06.895118   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:06.895038   27490 retry.go:31] will retry after 880.755684ms: waiting for machine to come up
	I1105 18:04:07.777105   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:07.777585   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:07.777629   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:07.777517   27490 retry.go:31] will retry after 728.781586ms: waiting for machine to come up
	I1105 18:04:08.507833   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:08.508322   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:08.508350   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:08.508268   27490 retry.go:31] will retry after 1.405343367s: waiting for machine to come up
	I1105 18:04:09.915737   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:09.916175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:09.916206   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:09.916130   27490 retry.go:31] will retry after 1.614277424s: waiting for machine to come up
	I1105 18:04:11.532132   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:11.532606   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:11.532651   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:11.532528   27490 retry.go:31] will retry after 2.182290087s: waiting for machine to come up
	I1105 18:04:13.716671   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:13.717064   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:13.717090   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:13.717036   27490 retry.go:31] will retry after 2.181711488s: waiting for machine to come up
	I1105 18:04:15.901246   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:15.901742   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:15.901769   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:15.901678   27490 retry.go:31] will retry after 3.553887492s: waiting for machine to come up
	I1105 18:04:19.457631   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:19.458252   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:19.458280   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:19.458200   27490 retry.go:31] will retry after 2.842714356s: waiting for machine to come up
	I1105 18:04:22.304175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:22.304555   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find current IP address of domain ha-844661-m02 in network mk-ha-844661
	I1105 18:04:22.304577   27131 main.go:141] libmachine: (ha-844661-m02) DBG | I1105 18:04:22.304516   27490 retry.go:31] will retry after 4.429177675s: waiting for machine to come up
	I1105 18:04:26.738445   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.738953   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has current primary IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.739021   27131 main.go:141] libmachine: (ha-844661-m02) Found IP for machine: 192.168.39.38
	I1105 18:04:26.739034   27131 main.go:141] libmachine: (ha-844661-m02) Reserving static IP address...
	I1105 18:04:26.739350   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find host DHCP lease matching {name: "ha-844661-m02", mac: "52:54:00:46:71:ad", ip: "192.168.39.38"} in network mk-ha-844661
	I1105 18:04:26.812299   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Getting to WaitForSSH function...
	I1105 18:04:26.812324   27131 main.go:141] libmachine: (ha-844661-m02) Reserved static IP address: 192.168.39.38
	I1105 18:04:26.812336   27131 main.go:141] libmachine: (ha-844661-m02) Waiting for SSH to be available...
	I1105 18:04:26.815175   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:26.815513   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661
	I1105 18:04:26.815540   27131 main.go:141] libmachine: (ha-844661-m02) DBG | unable to find defined IP address of network mk-ha-844661 interface with MAC address 52:54:00:46:71:ad
	I1105 18:04:26.815668   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH client type: external
	I1105 18:04:26.815699   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa (-rw-------)
	I1105 18:04:26.815752   27131 main.go:141] libmachine: (ha-844661-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:04:26.815781   27131 main.go:141] libmachine: (ha-844661-m02) DBG | About to run SSH command:
	I1105 18:04:26.815798   27131 main.go:141] libmachine: (ha-844661-m02) DBG | exit 0
	I1105 18:04:26.819693   27131 main.go:141] libmachine: (ha-844661-m02) DBG | SSH cmd err, output: exit status 255: 
	I1105 18:04:26.819710   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1105 18:04:26.819733   27131 main.go:141] libmachine: (ha-844661-m02) DBG | command : exit 0
	I1105 18:04:26.819747   27131 main.go:141] libmachine: (ha-844661-m02) DBG | err     : exit status 255
	I1105 18:04:26.819758   27131 main.go:141] libmachine: (ha-844661-m02) DBG | output  : 
	I1105 18:04:29.821203   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Getting to WaitForSSH function...
	I1105 18:04:29.823337   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.823729   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:29.823762   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.823872   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH client type: external
	I1105 18:04:29.823894   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa (-rw-------)
	I1105 18:04:29.823922   27131 main.go:141] libmachine: (ha-844661-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:04:29.823940   27131 main.go:141] libmachine: (ha-844661-m02) DBG | About to run SSH command:
	I1105 18:04:29.823952   27131 main.go:141] libmachine: (ha-844661-m02) DBG | exit 0
	I1105 18:04:29.951085   27131 main.go:141] libmachine: (ha-844661-m02) DBG | SSH cmd err, output: <nil>: 
	I1105 18:04:29.951342   27131 main.go:141] libmachine: (ha-844661-m02) KVM machine creation complete!
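The "will retry after ..." lines above come from minikube's retry helper: machine creation polls for the domain's DHCP lease and then probes SSH with a throwaway `exit 0` command, sleeping a randomized, growing interval between attempts. Below is a minimal sketch of that polling shape; it is not minikube's actual retry.go, and the helper name, delays, and timeout are illustrative only.

```go
// Illustrative only: a jittered polling loop in the shape of the
// "will retry after ..." messages in the log above. Not minikube's retry.go.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor (hypothetical helper) retries check until it succeeds or the
// overall timeout expires, sleeping a randomized, slowly growing delay
// between attempts.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	base := time.Second
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		// Randomize and slowly grow the delay, as in the log (1.4s, 1.6s, 2.1s, ...).
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		base += base / 2
	}
	return errors.New("timed out waiting for machine")
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		// Stand-in condition: pretend the IP only becomes visible on the third poll.
		if attempts++; attempts < 3 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}
```

The same loop shape explains the SSH probes above: the first `exit 0` attempt at 18:04:26 fails with exit status 255 because sshd is not up yet, and the retry at 18:04:29 succeeds.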
	I1105 18:04:29.951700   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:29.952363   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:29.952587   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:29.952760   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:04:29.952794   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetState
	I1105 18:04:29.954134   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:04:29.954148   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:04:29.954153   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:04:29.954158   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:29.956382   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.956701   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:29.956727   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:29.956885   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:29.957041   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:29.957158   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:29.957245   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:29.957384   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:29.957587   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:29.957598   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:04:30.062109   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:04:30.062134   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:04:30.062144   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.064857   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.065391   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.065423   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.065611   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.065805   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.065970   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.066128   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.066292   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.066496   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.066512   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:04:30.175484   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:04:30.175559   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:04:30.175573   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:04:30.175583   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.175860   27131 buildroot.go:166] provisioning hostname "ha-844661-m02"
	I1105 18:04:30.175892   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.176101   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.178534   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.178884   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.178952   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.179036   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.179212   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.179364   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.179519   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.179693   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.179914   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.179935   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661-m02 && echo "ha-844661-m02" | sudo tee /etc/hostname
	I1105 18:04:30.302286   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661-m02
	
	I1105 18:04:30.302313   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.305041   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.305376   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.305397   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.305565   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.305735   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.305864   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.306027   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.306153   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.306345   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.306368   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:04:30.418880   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:04:30.418913   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:04:30.418933   27131 buildroot.go:174] setting up certificates
	I1105 18:04:30.418944   27131 provision.go:84] configureAuth start
	I1105 18:04:30.418958   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetMachineName
	I1105 18:04:30.419230   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:30.421818   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.422198   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.422218   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.422357   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.424553   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.424893   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.424934   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.425058   27131 provision.go:143] copyHostCerts
	I1105 18:04:30.425085   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:04:30.425123   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:04:30.425135   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:04:30.425209   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:04:30.425294   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:04:30.425312   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:04:30.425316   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:04:30.425339   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:04:30.425392   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:04:30.425411   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:04:30.425417   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:04:30.425437   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:04:30.425500   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661-m02 san=[127.0.0.1 192.168.39.38 ha-844661-m02 localhost minikube]
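configureAuth above issues a per-node server certificate signed by the shared minikube CA, with the SAN list shown in the log line (loopback, the node IP 192.168.39.38, the hostname, localhost, minikube). A rough sketch of that kind of issuance with Go's crypto/x509 follows; this is not minikube's code, the CA here is generated in-process rather than loaded from ca.pem/ca-key.pem, and error handling is dropped for brevity.

```go
// Illustrative sketch of issuing a CA-signed server cert with the SANs from
// the log above. Errors are ignored for brevity; not minikube's implementation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA (minikube would load ca.pem / ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log: IPs plus host names.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-844661-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.38")},
		DNSNames:     []string{"ha-844661-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```

In the log that follows, the equivalent key material is pushed to the node as /etc/docker/server.pem and /etc/docker/server-key.pem by copyRemoteCerts.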
	I1105 18:04:30.669687   27131 provision.go:177] copyRemoteCerts
	I1105 18:04:30.669745   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:04:30.669767   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.672398   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.672764   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.672792   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.672964   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.673166   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.673319   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.673440   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:30.757634   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:04:30.757707   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:04:30.779929   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:04:30.779991   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:04:30.802282   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:04:30.802340   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:04:30.824080   27131 provision.go:87] duration metric: took 405.122043ms to configureAuth
	I1105 18:04:30.824105   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:04:30.824267   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:30.824337   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:30.826767   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.827187   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:30.827210   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:30.827374   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:30.827574   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.827761   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:30.827911   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:30.828074   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:30.828241   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:30.828257   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:04:31.054134   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:04:31.054167   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:04:31.054177   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetURL
	I1105 18:04:31.055397   27131 main.go:141] libmachine: (ha-844661-m02) DBG | Using libvirt version 6000000
	I1105 18:04:31.057579   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.057909   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.057942   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.058035   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:04:31.058055   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:04:31.058063   27131 client.go:171] duration metric: took 27.788882282s to LocalClient.Create
	I1105 18:04:31.058089   27131 start.go:167] duration metric: took 27.788944247s to libmachine.API.Create "ha-844661"
	I1105 18:04:31.058102   27131 start.go:293] postStartSetup for "ha-844661-m02" (driver="kvm2")
	I1105 18:04:31.058116   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:04:31.058140   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.058392   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:04:31.058416   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.060812   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.061181   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.061207   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.061372   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.061520   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.061638   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.061750   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.141343   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:04:31.145282   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:04:31.145305   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:04:31.145386   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:04:31.145475   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:04:31.145487   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:04:31.145583   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:04:31.154867   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:04:31.177214   27131 start.go:296] duration metric: took 119.098287ms for postStartSetup
	I1105 18:04:31.177266   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetConfigRaw
	I1105 18:04:31.177795   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:31.180218   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.180581   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.180609   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.180893   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:04:31.181127   27131 start.go:128] duration metric: took 27.931509235s to createHost
	I1105 18:04:31.181151   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.183589   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.183931   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.183977   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.184093   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.184255   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.184473   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.184627   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.184776   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:04:31.184927   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1105 18:04:31.184936   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:04:31.291832   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829871.274251077
	
	I1105 18:04:31.291862   27131 fix.go:216] guest clock: 1730829871.274251077
	I1105 18:04:31.291873   27131 fix.go:229] Guest: 2024-11-05 18:04:31.274251077 +0000 UTC Remote: 2024-11-05 18:04:31.181141215 +0000 UTC m=+70.565834196 (delta=93.109862ms)
	I1105 18:04:31.291893   27131 fix.go:200] guest clock delta is within tolerance: 93.109862ms
	I1105 18:04:31.291902   27131 start.go:83] releasing machines lock for "ha-844661-m02", held for 28.042391542s
	I1105 18:04:31.291933   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.292188   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:31.294847   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.295152   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.295182   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.297372   27131 out.go:177] * Found network options:
	I1105 18:04:31.298882   27131 out.go:177]   - NO_PROXY=192.168.39.48
	W1105 18:04:31.300182   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:04:31.300214   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.300744   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.300953   27131 main.go:141] libmachine: (ha-844661-m02) Calling .DriverName
	I1105 18:04:31.301049   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:04:31.301078   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	W1105 18:04:31.301139   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:04:31.301229   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:04:31.301249   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHHostname
	I1105 18:04:31.303834   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304115   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304147   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.304164   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304340   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.304518   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.304656   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:31.304683   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:31.304705   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.304817   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHPort
	I1105 18:04:31.304875   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.304966   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHKeyPath
	I1105 18:04:31.305123   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetSSHUsername
	I1105 18:04:31.305293   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m02/id_rsa Username:docker}
	I1105 18:04:31.537813   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:04:31.543318   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:04:31.543380   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:04:31.558192   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:04:31.558214   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:04:31.558265   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:04:31.574444   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:04:31.588020   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:04:31.588073   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:04:31.601225   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:04:31.614872   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:04:31.742673   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:04:31.906474   27131 docker.go:233] disabling docker service ...
	I1105 18:04:31.906547   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:04:31.920407   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:04:31.932829   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:04:32.065646   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:04:32.198693   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:04:32.211636   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:04:32.228537   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:04:32.228604   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.238359   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:04:32.238426   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.248245   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.258019   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.267772   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:04:32.277903   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.287745   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.304428   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:04:32.315166   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:04:32.324687   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:04:32.324739   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:04:32.338701   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:04:32.349299   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:32.473469   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
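The run of sed commands above rewrites CRI-O's minikube drop-in before the service restart: it pins the pause image, switches the cgroup manager to cgroupfs with conmon placed in the pod cgroup, and opens unprivileged low ports via default_sysctls. Reconstructed from those edits, /etc/crio/crio.conf.d/02-crio.conf would contain roughly the following; only the keys touched above are shown, and the [crio.image]/[crio.runtime] section placement is assumed from CRI-O's documented config layout rather than taken from this log.

```toml
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
```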
	I1105 18:04:32.562263   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:04:32.562341   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:04:32.567966   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:04:32.568012   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:04:32.571415   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:04:32.608501   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:04:32.608591   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:04:32.636314   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:04:32.664649   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:04:32.666073   27131 out.go:177]   - env NO_PROXY=192.168.39.48
	I1105 18:04:32.667578   27131 main.go:141] libmachine: (ha-844661-m02) Calling .GetIP
	I1105 18:04:32.670054   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:32.670404   27131 main.go:141] libmachine: (ha-844661-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:71:ad", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:04:17 +0000 UTC Type:0 Mac:52:54:00:46:71:ad Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-844661-m02 Clientid:01:52:54:00:46:71:ad}
	I1105 18:04:32.670434   27131 main.go:141] libmachine: (ha-844661-m02) DBG | domain ha-844661-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:46:71:ad in network mk-ha-844661
	I1105 18:04:32.670640   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:04:32.675107   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:04:32.687100   27131 mustload.go:65] Loading cluster: ha-844661
	I1105 18:04:32.687313   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:32.687563   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:32.687614   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:32.702173   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I1105 18:04:32.702544   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:32.703040   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:32.703059   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:32.703356   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:32.703527   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:04:32.705121   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:32.705395   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:32.705427   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:32.719590   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I1105 18:04:32.719963   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:32.720450   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:32.720471   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:32.720753   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:32.720928   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:32.721076   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.38
	I1105 18:04:32.721087   27131 certs.go:194] generating shared ca certs ...
	I1105 18:04:32.721099   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.721216   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:04:32.721253   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:04:32.721262   27131 certs.go:256] generating profile certs ...
	I1105 18:04:32.721325   27131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:04:32.721348   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8
	I1105 18:04:32.721359   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.254]
	I1105 18:04:32.817294   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 ...
	I1105 18:04:32.817319   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8: {Name:mk45feacdbeaf35fb15921aeeafdbedf19f7f2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.817474   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8 ...
	I1105 18:04:32.817487   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8: {Name:mkf0dcf762cb289770c94346689eba9d112e92a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:04:32.817551   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.45e743c8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:04:32.817676   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.45e743c8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:04:32.817799   27131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:04:32.817813   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:04:32.817827   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:04:32.817838   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:04:32.817853   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:04:32.817867   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:04:32.817879   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:04:32.817890   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:04:32.817899   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:04:32.817954   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:04:32.817983   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:04:32.817992   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:04:32.818014   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:04:32.818034   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:04:32.818055   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:04:32.818093   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:04:32.818118   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:04:32.818132   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:04:32.818145   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:32.818175   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:32.821627   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:32.822087   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:32.822115   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:32.822324   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:32.822514   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:32.822635   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:32.822754   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:32.895384   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:04:32.901151   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:04:32.911563   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:04:32.916135   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1105 18:04:32.926023   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:04:32.929795   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:04:32.939479   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:04:32.943460   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:04:32.953743   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:04:32.957464   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:04:32.967126   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:04:32.971370   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 18:04:32.981265   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:04:33.005948   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:04:33.028537   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:04:33.051691   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:04:33.077296   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 18:04:33.099924   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:04:33.122118   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:04:33.144496   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:04:33.167061   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:04:33.189719   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:04:33.212311   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:04:33.234431   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:04:33.249569   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1105 18:04:33.264947   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:04:33.280382   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:04:33.295047   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:04:33.310658   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 18:04:33.325227   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:04:33.340438   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:04:33.345637   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:04:33.355163   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.359277   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.359332   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:04:33.364640   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:04:33.374197   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:04:33.383883   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.388205   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.388269   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:04:33.393534   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:04:33.403611   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:04:33.413496   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.417522   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.417572   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:04:33.422911   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:04:33.432783   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:04:33.436475   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:04:33.436531   27131 kubeadm.go:934] updating node {m02 192.168.39.38 8443 v1.31.2 crio true true} ...
	I1105 18:04:33.436634   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:04:33.436658   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:04:33.436695   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:04:33.453065   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:04:33.453148   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
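	The manifest above is the static pod that runs kube-vip on the control-plane node so the API-server VIP 192.168.39.254 stays reachable with leader election across control-plane members. A minimal sketch of how the VIP could be checked by hand, assuming the ha-844661 profile and the m02 node name used in this run (these commands are not part of the test itself):
	# Confirm kube-vip is running and has bound the VIP on the new control-plane node.
	minikube -p ha-844661 ssh -n m02 -- sudo crictl ps --name kube-vip
	minikube -p ha-844661 ssh -n m02 -- ip addr show eth0 | grep 192.168.39.254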
	I1105 18:04:33.453221   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:04:33.462691   27131 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 18:04:33.462762   27131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 18:04:33.472553   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 18:04:33.472563   27131 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1105 18:04:33.472583   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:04:33.472584   27131 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1105 18:04:33.472655   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:04:33.477105   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 18:04:33.477133   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 18:04:34.400283   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:04:34.400361   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:04:34.405010   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 18:04:34.405045   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 18:04:34.538786   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:04:34.578282   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:04:34.578382   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:04:34.588498   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 18:04:34.588540   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1105 18:04:34.951438   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:04:34.960448   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1105 18:04:34.976680   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:04:34.992424   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:04:35.007877   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:04:35.011593   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:04:35.023033   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:35.153794   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:04:35.171325   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:04:35.171790   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:04:35.171844   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:04:35.187008   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I1105 18:04:35.187511   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:04:35.188000   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:04:35.188021   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:04:35.188401   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:04:35.188593   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:04:35.188755   27131 start.go:317] joinCluster: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:04:35.188861   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 18:04:35.188876   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:04:35.192373   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:35.193007   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:04:35.193036   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:04:35.193153   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:04:35.193322   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:04:35.193493   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:04:35.193633   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:04:35.352325   27131 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:35.352369   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token io85g1.ce9beps1a5sdfopc --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m02 --control-plane --apiserver-advertise-address=192.168.39.38 --apiserver-bind-port=8443"
	I1105 18:04:56.900009   27131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token io85g1.ce9beps1a5sdfopc --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m02 --control-plane --apiserver-advertise-address=192.168.39.38 --apiserver-bind-port=8443": (21.547609543s)
	I1105 18:04:56.900049   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 18:04:57.434153   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661-m02 minikube.k8s.io/updated_at=2024_11_05T18_04_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=false
	I1105 18:04:57.562849   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844661-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 18:04:57.694503   27131 start.go:319] duration metric: took 22.505743601s to joinCluster
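	The join above adds m02 as a second control-plane member through the VIP endpoint control-plane.minikube.internal:8443. A hedged sketch of how the resulting topology could be inspected afterwards, assuming the ha-844661 kubectl context is present on the host (illustrative only, not commands executed by the test):
	# List both nodes and the etcd members that back the expanded control plane.
	kubectl --context ha-844661 get nodes -o wide
	kubectl --context ha-844661 -n kube-system get pods -l component=etcd -o wide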
	I1105 18:04:57.694592   27131 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:04:57.694912   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:57.695940   27131 out.go:177] * Verifying Kubernetes components...
	I1105 18:04:57.697102   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:04:57.983429   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:04:58.029548   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:04:58.029888   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:04:58.029994   27131 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.48:8443
	I1105 18:04:58.030271   27131 node_ready.go:35] waiting up to 6m0s for node "ha-844661-m02" to be "Ready" ...
	I1105 18:04:58.030407   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:58.030418   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:58.030429   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:58.030436   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:58.043836   27131 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 18:04:58.531097   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:58.531124   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:58.531135   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:58.531142   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:58.543712   27131 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1105 18:04:59.030878   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:59.030899   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:59.030908   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:59.030912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:59.035656   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:04:59.530596   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:04:59.530621   27131 round_trippers.go:469] Request Headers:
	I1105 18:04:59.530633   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:04:59.530639   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:04:59.534120   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:00.030984   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:00.031006   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:00.031014   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:00.031017   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:00.034282   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:00.035034   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:00.530821   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:00.530846   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:00.530858   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:00.530864   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:00.536618   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:05:01.031310   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:01.031331   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:01.031340   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:01.031345   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:01.034641   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:01.530557   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:01.530578   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:01.530588   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:01.530595   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:01.539049   27131 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1105 18:05:02.031172   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:02.031197   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:02.031206   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:02.031210   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:02.034664   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:02.035295   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:02.531134   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:02.531158   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:02.531168   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:02.531173   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:02.534691   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:03.030649   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:03.030676   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:03.030684   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:03.030689   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:03.034294   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:03.531341   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:03.531362   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:03.531370   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:03.531374   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:03.534345   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:04.031389   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:04.031412   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:04.031420   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:04.031425   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:04.034432   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:04.531089   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:04.531121   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:04.531130   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:04.531134   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:04.534592   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:04.535270   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:05.030583   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:05.030606   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:05.030614   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:05.030618   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:05.034321   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:05.530714   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:05.530735   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:05.530744   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:05.530748   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:05.534305   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:06.031071   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:06.031093   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:06.031101   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:06.031105   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:06.034416   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:06.531473   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:06.531497   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:06.531506   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:06.531513   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:06.534473   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:07.030494   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:07.030518   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:07.030526   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:07.030530   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:07.033934   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:07.034429   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:07.530834   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:07.530861   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:07.530871   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:07.530876   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:07.534136   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:08.031065   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:08.031086   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:08.031094   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:08.031097   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:08.034490   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:08.530752   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:08.530774   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:08.530782   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:08.530787   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:08.534189   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:09.030956   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:09.030998   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:09.031007   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:09.031013   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:09.034514   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:09.035140   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:09.531531   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:09.531558   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:09.531569   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:09.531577   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:09.534682   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:10.030566   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:10.030603   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:10.030611   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:10.030615   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:10.034288   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:10.530760   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:10.530786   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:10.530797   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:10.530803   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:10.535094   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:11.031135   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:11.031156   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:11.031164   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:11.031167   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:11.034996   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:11.035590   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:11.530958   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:11.531025   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:11.531033   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:11.531036   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:11.534280   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:12.031192   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:12.031217   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:12.031226   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:12.031229   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:12.034799   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:12.530835   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:12.530859   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:12.530866   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:12.530871   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:12.535212   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:13.031138   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:13.031161   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:13.031168   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:13.031174   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:13.035138   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:13.035640   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:13.531336   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:13.531361   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:13.531372   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:13.531377   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:13.534343   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:14.031248   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:14.031269   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:14.031277   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:14.031280   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:14.034318   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:14.531121   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:14.531144   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:14.531152   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:14.531156   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:14.534522   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.031444   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:15.031471   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:15.031481   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:15.031485   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:15.035107   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.531231   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:15.531259   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:15.531295   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:15.531301   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:15.534694   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:15.535240   27131 node_ready.go:53] node "ha-844661-m02" has status "Ready":"False"
	I1105 18:05:16.031143   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:16.031166   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:16.031174   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:16.031178   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:16.034542   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:16.530558   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:16.530585   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:16.530592   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:16.530596   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:16.534438   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.031334   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.031354   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.031363   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.031377   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.034859   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.530585   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.530609   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.530617   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.530621   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.534242   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.534822   27131 node_ready.go:49] node "ha-844661-m02" has status "Ready":"True"
	I1105 18:05:17.534842   27131 node_ready.go:38] duration metric: took 19.504524126s for node "ha-844661-m02" to be "Ready" ...
	I1105 18:05:17.534853   27131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:05:17.534933   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:17.534945   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.534955   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.534962   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.539957   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:17.545365   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.545456   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4bdfz
	I1105 18:05:17.545468   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.545479   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.545485   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.548667   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.549324   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.549340   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.549350   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.549355   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.552460   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.553059   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.553079   27131 pod_ready.go:82] duration metric: took 7.687809ms for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.553089   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.553143   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5g97
	I1105 18:05:17.553151   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.553157   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.553161   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.556133   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.556688   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.556701   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.556708   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.556711   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.559655   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.560102   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.560125   27131 pod_ready.go:82] duration metric: took 7.028626ms for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.560138   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.560192   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661
	I1105 18:05:17.560200   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.560207   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.560211   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.563041   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.563593   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.563605   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.563612   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.563617   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.566382   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:17.566799   27131 pod_ready.go:93] pod "etcd-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.566816   27131 pod_ready.go:82] duration metric: took 6.672004ms for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.566824   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.566881   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m02
	I1105 18:05:17.566890   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.566897   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.566901   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.570076   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.570614   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:17.570630   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.570639   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.570644   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.574134   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.574566   27131 pod_ready.go:93] pod "etcd-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.574584   27131 pod_ready.go:82] duration metric: took 7.753168ms for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.574604   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.730613   27131 request.go:632] Waited for 155.951288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:05:17.730716   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:05:17.730738   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.730750   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.730756   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.734460   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.931599   27131 request.go:632] Waited for 196.455308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.931691   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:17.931703   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:17.931714   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:17.931720   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:17.935472   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:17.936248   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:17.936270   27131 pod_ready.go:82] duration metric: took 361.658171ms for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:17.936283   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.131401   27131 request.go:632] Waited for 195.044956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:05:18.131499   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:05:18.131506   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.131514   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.131520   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.135482   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.331525   27131 request.go:632] Waited for 195.194468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:18.331593   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:18.331598   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.331605   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.331610   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.334692   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.335419   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:18.335438   27131 pod_ready.go:82] duration metric: took 399.143957ms for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.335449   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.530629   27131 request.go:632] Waited for 195.065538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:05:18.530715   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:05:18.530724   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.530734   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.530747   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.534793   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:18.731049   27131 request.go:632] Waited for 195.44458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:18.731128   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:18.731134   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.731143   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.731148   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.734646   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:18.735269   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:18.735297   27131 pod_ready.go:82] duration metric: took 399.840715ms for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.735311   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:18.931233   27131 request.go:632] Waited for 195.850053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:05:18.931303   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:05:18.931310   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:18.931320   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:18.931326   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:18.935301   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.131408   27131 request.go:632] Waited for 195.30965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.131471   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.131476   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.131483   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.131487   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.134983   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.135599   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.135639   27131 pod_ready.go:82] duration metric: took 400.298272ms for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.135650   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.330670   27131 request.go:632] Waited for 194.9293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:05:19.330729   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:05:19.330734   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.330741   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.330745   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.334278   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.531215   27131 request.go:632] Waited for 196.368669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:19.531275   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:19.531280   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.531287   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.531290   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.535032   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.535778   27131 pod_ready.go:93] pod "kube-proxy-pjpkh" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.535799   27131 pod_ready.go:82] duration metric: took 400.142488ms for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.535811   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.730859   27131 request.go:632] Waited for 194.981031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:05:19.730957   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:05:19.730981   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.730993   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.731003   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.734505   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:19.931630   27131 request.go:632] Waited for 196.356772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.931695   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:19.931703   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:19.931713   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:19.931721   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:19.934664   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:05:19.935138   27131 pod_ready.go:93] pod "kube-proxy-zsbfs" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:19.935158   27131 pod_ready.go:82] duration metric: took 399.338721ms for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:19.935171   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.131253   27131 request.go:632] Waited for 196.012842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:05:20.131339   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:05:20.131346   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.131354   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.131365   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.135136   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.331213   27131 request.go:632] Waited for 195.465792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:20.331270   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:05:20.331276   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.331283   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.331287   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.334310   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.334872   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:20.334894   27131 pod_ready.go:82] duration metric: took 399.711008ms for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.334908   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.531014   27131 request.go:632] Waited for 195.998146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:05:20.531072   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:05:20.531077   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.531084   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.531092   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.534503   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.731389   27131 request.go:632] Waited for 196.312857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:20.731476   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:05:20.731488   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.731496   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.731502   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.734866   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:20.735369   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:05:20.735387   27131 pod_ready.go:82] duration metric: took 400.467875ms for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:05:20.735398   27131 pod_ready.go:39] duration metric: took 3.200533347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
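The loop above polls each system pod through the API server until its Ready condition reports True. A minimal client-go sketch of that pattern follows; the clientset, pod name, namespace and timeout are illustrative assumptions, not values taken from this run.

    package example

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil // pod reports Ready
                    }
                }
            }
            time.Sleep(400 * time.Millisecond) // roughly the cadence visible in the log timestamps
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }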
	I1105 18:05:20.735415   27131 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:05:20.735464   27131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:05:20.751422   27131 api_server.go:72] duration metric: took 23.056783291s to wait for apiserver process to appear ...
	I1105 18:05:20.751455   27131 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:05:20.751507   27131 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1105 18:05:20.755872   27131 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1105 18:05:20.755957   27131 round_trippers.go:463] GET https://192.168.39.48:8443/version
	I1105 18:05:20.755969   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.755980   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.755990   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.756842   27131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 18:05:20.756943   27131 api_server.go:141] control plane version: v1.31.2
	I1105 18:05:20.756968   27131 api_server.go:131] duration metric: took 5.494459ms to wait for apiserver health ...
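The health wait above issues GET /healthz and then GET /version against the control-plane endpoint. Below is a standard-library sketch of that probe; the URL and the InsecureSkipVerify transport are illustrative choices (minikube itself trusts the cluster CA).

    package example

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy returns nil when the healthz endpoint answers 200 with body "ok".
    func apiserverHealthy(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url) // e.g. https://192.168.39.48:8443/healthz
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
        }
        return nil
    }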
	I1105 18:05:20.756978   27131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:05:20.930580   27131 request.go:632] Waited for 173.520285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:20.930658   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:20.930664   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:20.930672   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:20.930676   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:20.935815   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:05:20.939904   27131 system_pods.go:59] 17 kube-system pods found
	I1105 18:05:20.939939   27131 system_pods.go:61] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:05:20.939945   27131 system_pods.go:61] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:05:20.939949   27131 system_pods.go:61] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:05:20.939952   27131 system_pods.go:61] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:05:20.939955   27131 system_pods.go:61] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:05:20.939959   27131 system_pods.go:61] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:05:20.939962   27131 system_pods.go:61] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:05:20.939965   27131 system_pods.go:61] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:05:20.939968   27131 system_pods.go:61] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:05:20.939977   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:05:20.939981   27131 system_pods.go:61] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:05:20.939984   27131 system_pods.go:61] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:05:20.939989   27131 system_pods.go:61] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:05:20.939992   27131 system_pods.go:61] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:05:20.939997   27131 system_pods.go:61] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:05:20.940003   27131 system_pods.go:61] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:05:20.940006   27131 system_pods.go:61] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:05:20.940012   27131 system_pods.go:74] duration metric: took 183.024873ms to wait for pod list to return data ...
	I1105 18:05:20.940022   27131 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:05:21.131476   27131 request.go:632] Waited for 191.3776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:05:21.131535   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:05:21.131540   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.131548   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.131552   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.135052   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:21.135309   27131 default_sa.go:45] found service account: "default"
	I1105 18:05:21.135328   27131 default_sa.go:55] duration metric: took 195.299598ms for default service account to be created ...
	I1105 18:05:21.135339   27131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:05:21.330735   27131 request.go:632] Waited for 195.314096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:21.330794   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:05:21.330799   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.330807   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.330810   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.335501   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:05:21.339693   27131 system_pods.go:86] 17 kube-system pods found
	I1105 18:05:21.339720   27131 system_pods.go:89] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:05:21.339726   27131 system_pods.go:89] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:05:21.339731   27131 system_pods.go:89] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:05:21.339734   27131 system_pods.go:89] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:05:21.339738   27131 system_pods.go:89] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:05:21.339741   27131 system_pods.go:89] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:05:21.339745   27131 system_pods.go:89] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:05:21.339748   27131 system_pods.go:89] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:05:21.339751   27131 system_pods.go:89] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:05:21.339755   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:05:21.339759   27131 system_pods.go:89] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:05:21.339762   27131 system_pods.go:89] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:05:21.339765   27131 system_pods.go:89] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:05:21.339769   27131 system_pods.go:89] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:05:21.339774   27131 system_pods.go:89] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:05:21.339779   27131 system_pods.go:89] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:05:21.339782   27131 system_pods.go:89] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:05:21.339788   27131 system_pods.go:126] duration metric: took 204.442408ms to wait for k8s-apps to be running ...
	I1105 18:05:21.339802   27131 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:05:21.339842   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:05:21.354615   27131 system_svc.go:56] duration metric: took 14.795984ms WaitForService to wait for kubelet
	I1105 18:05:21.354651   27131 kubeadm.go:582] duration metric: took 23.660015871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:05:21.354696   27131 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:05:21.531068   27131 request.go:632] Waited for 176.291328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I1105 18:05:21.531146   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes
	I1105 18:05:21.531151   27131 round_trippers.go:469] Request Headers:
	I1105 18:05:21.531159   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:05:21.531164   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:05:21.534798   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:05:21.535495   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:05:21.535541   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:05:21.535562   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:05:21.535565   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:05:21.535570   27131 node_conditions.go:105] duration metric: took 180.868401ms to run NodePressure ...
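The NodePressure step above reads each node's ephemeral-storage and CPU capacity from the node list. A short client-go sketch of the same read; cs is an assumed *kubernetes.Clientset built elsewhere.

    package example

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints their CPU and ephemeral-storage capacity.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }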
	I1105 18:05:21.535585   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:05:21.535607   27131 start.go:255] writing updated cluster config ...
	I1105 18:05:21.537763   27131 out.go:201] 
	I1105 18:05:21.539166   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:21.539250   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:21.540660   27131 out.go:177] * Starting "ha-844661-m03" control-plane node in "ha-844661" cluster
	I1105 18:05:21.541637   27131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:05:21.541660   27131 cache.go:56] Caching tarball of preloaded images
	I1105 18:05:21.541776   27131 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:05:21.541788   27131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:05:21.541886   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:21.542068   27131 start.go:360] acquireMachinesLock for ha-844661-m03: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:05:21.542109   27131 start.go:364] duration metric: took 21.826µs to acquireMachinesLock for "ha-844661-m03"
	I1105 18:05:21.542124   27131 start.go:93] Provisioning new machine with config: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:05:21.542209   27131 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1105 18:05:21.543860   27131 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:05:21.543943   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:21.543975   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:21.559283   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1105 18:05:21.559671   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:21.560085   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:21.560107   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:21.560440   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:21.560618   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:21.560762   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:21.560967   27131 start.go:159] libmachine.API.Create for "ha-844661" (driver="kvm2")
	I1105 18:05:21.560994   27131 client.go:168] LocalClient.Create starting
	I1105 18:05:21.561031   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:05:21.561079   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:05:21.561096   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:05:21.561164   27131 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:05:21.561192   27131 main.go:141] libmachine: Decoding PEM data...
	I1105 18:05:21.561207   27131 main.go:141] libmachine: Parsing certificate...
	I1105 18:05:21.561232   27131 main.go:141] libmachine: Running pre-create checks...
	I1105 18:05:21.561244   27131 main.go:141] libmachine: (ha-844661-m03) Calling .PreCreateCheck
	I1105 18:05:21.561482   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:21.561876   27131 main.go:141] libmachine: Creating machine...
	I1105 18:05:21.561887   27131 main.go:141] libmachine: (ha-844661-m03) Calling .Create
	I1105 18:05:21.562039   27131 main.go:141] libmachine: (ha-844661-m03) Creating KVM machine...
	I1105 18:05:21.563199   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found existing default KVM network
	I1105 18:05:21.563316   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found existing private KVM network mk-ha-844661
	I1105 18:05:21.563415   27131 main.go:141] libmachine: (ha-844661-m03) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 ...
	I1105 18:05:21.563439   27131 main.go:141] libmachine: (ha-844661-m03) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:05:21.563512   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.563393   27902 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:05:21.563587   27131 main.go:141] libmachine: (ha-844661-m03) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:05:21.796365   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.796229   27902 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa...
	I1105 18:05:21.882674   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.882568   27902 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/ha-844661-m03.rawdisk...
	I1105 18:05:21.882702   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Writing magic tar header
	I1105 18:05:21.882713   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Writing SSH key tar header
	I1105 18:05:21.882768   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:21.882708   27902 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 ...
	I1105 18:05:21.882834   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03
	I1105 18:05:21.882863   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03 (perms=drwx------)
	I1105 18:05:21.882876   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:05:21.882896   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:05:21.882908   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:05:21.882922   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:05:21.882944   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:05:21.882956   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:05:21.883017   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Checking permissions on dir: /home
	I1105 18:05:21.883034   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Skipping /home - not owner
	I1105 18:05:21.883044   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:05:21.883057   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:05:21.883070   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:05:21.883081   27131 main.go:141] libmachine: (ha-844661-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:05:21.883089   27131 main.go:141] libmachine: (ha-844661-m03) Creating domain...
	I1105 18:05:21.883931   27131 main.go:141] libmachine: (ha-844661-m03) define libvirt domain using xml: 
	I1105 18:05:21.883952   27131 main.go:141] libmachine: (ha-844661-m03) <domain type='kvm'>
	I1105 18:05:21.883976   27131 main.go:141] libmachine: (ha-844661-m03)   <name>ha-844661-m03</name>
	I1105 18:05:21.883997   27131 main.go:141] libmachine: (ha-844661-m03)   <memory unit='MiB'>2200</memory>
	I1105 18:05:21.884009   27131 main.go:141] libmachine: (ha-844661-m03)   <vcpu>2</vcpu>
	I1105 18:05:21.884020   27131 main.go:141] libmachine: (ha-844661-m03)   <features>
	I1105 18:05:21.884028   27131 main.go:141] libmachine: (ha-844661-m03)     <acpi/>
	I1105 18:05:21.884038   27131 main.go:141] libmachine: (ha-844661-m03)     <apic/>
	I1105 18:05:21.884046   27131 main.go:141] libmachine: (ha-844661-m03)     <pae/>
	I1105 18:05:21.884056   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884078   27131 main.go:141] libmachine: (ha-844661-m03)   </features>
	I1105 18:05:21.884099   27131 main.go:141] libmachine: (ha-844661-m03)   <cpu mode='host-passthrough'>
	I1105 18:05:21.884109   27131 main.go:141] libmachine: (ha-844661-m03)   
	I1105 18:05:21.884119   27131 main.go:141] libmachine: (ha-844661-m03)   </cpu>
	I1105 18:05:21.884129   27131 main.go:141] libmachine: (ha-844661-m03)   <os>
	I1105 18:05:21.884134   27131 main.go:141] libmachine: (ha-844661-m03)     <type>hvm</type>
	I1105 18:05:21.884144   27131 main.go:141] libmachine: (ha-844661-m03)     <boot dev='cdrom'/>
	I1105 18:05:21.884151   27131 main.go:141] libmachine: (ha-844661-m03)     <boot dev='hd'/>
	I1105 18:05:21.884159   27131 main.go:141] libmachine: (ha-844661-m03)     <bootmenu enable='no'/>
	I1105 18:05:21.884169   27131 main.go:141] libmachine: (ha-844661-m03)   </os>
	I1105 18:05:21.884183   27131 main.go:141] libmachine: (ha-844661-m03)   <devices>
	I1105 18:05:21.884200   27131 main.go:141] libmachine: (ha-844661-m03)     <disk type='file' device='cdrom'>
	I1105 18:05:21.884216   27131 main.go:141] libmachine: (ha-844661-m03)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/boot2docker.iso'/>
	I1105 18:05:21.884227   27131 main.go:141] libmachine: (ha-844661-m03)       <target dev='hdc' bus='scsi'/>
	I1105 18:05:21.884237   27131 main.go:141] libmachine: (ha-844661-m03)       <readonly/>
	I1105 18:05:21.884245   27131 main.go:141] libmachine: (ha-844661-m03)     </disk>
	I1105 18:05:21.884252   27131 main.go:141] libmachine: (ha-844661-m03)     <disk type='file' device='disk'>
	I1105 18:05:21.884260   27131 main.go:141] libmachine: (ha-844661-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:05:21.884267   27131 main.go:141] libmachine: (ha-844661-m03)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/ha-844661-m03.rawdisk'/>
	I1105 18:05:21.884274   27131 main.go:141] libmachine: (ha-844661-m03)       <target dev='hda' bus='virtio'/>
	I1105 18:05:21.884279   27131 main.go:141] libmachine: (ha-844661-m03)     </disk>
	I1105 18:05:21.884289   27131 main.go:141] libmachine: (ha-844661-m03)     <interface type='network'>
	I1105 18:05:21.884295   27131 main.go:141] libmachine: (ha-844661-m03)       <source network='mk-ha-844661'/>
	I1105 18:05:21.884305   27131 main.go:141] libmachine: (ha-844661-m03)       <model type='virtio'/>
	I1105 18:05:21.884313   27131 main.go:141] libmachine: (ha-844661-m03)     </interface>
	I1105 18:05:21.884318   27131 main.go:141] libmachine: (ha-844661-m03)     <interface type='network'>
	I1105 18:05:21.884326   27131 main.go:141] libmachine: (ha-844661-m03)       <source network='default'/>
	I1105 18:05:21.884330   27131 main.go:141] libmachine: (ha-844661-m03)       <model type='virtio'/>
	I1105 18:05:21.884337   27131 main.go:141] libmachine: (ha-844661-m03)     </interface>
	I1105 18:05:21.884341   27131 main.go:141] libmachine: (ha-844661-m03)     <serial type='pty'>
	I1105 18:05:21.884347   27131 main.go:141] libmachine: (ha-844661-m03)       <target port='0'/>
	I1105 18:05:21.884351   27131 main.go:141] libmachine: (ha-844661-m03)     </serial>
	I1105 18:05:21.884358   27131 main.go:141] libmachine: (ha-844661-m03)     <console type='pty'>
	I1105 18:05:21.884363   27131 main.go:141] libmachine: (ha-844661-m03)       <target type='serial' port='0'/>
	I1105 18:05:21.884377   27131 main.go:141] libmachine: (ha-844661-m03)     </console>
	I1105 18:05:21.884395   27131 main.go:141] libmachine: (ha-844661-m03)     <rng model='virtio'>
	I1105 18:05:21.884408   27131 main.go:141] libmachine: (ha-844661-m03)       <backend model='random'>/dev/random</backend>
	I1105 18:05:21.884417   27131 main.go:141] libmachine: (ha-844661-m03)     </rng>
	I1105 18:05:21.884432   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884441   27131 main.go:141] libmachine: (ha-844661-m03)     
	I1105 18:05:21.884448   27131 main.go:141] libmachine: (ha-844661-m03)   </devices>
	I1105 18:05:21.884457   27131 main.go:141] libmachine: (ha-844661-m03) </domain>
	I1105 18:05:21.884464   27131 main.go:141] libmachine: (ha-844661-m03) 
	I1105 18:05:21.890775   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:13:05:59 in network default
	I1105 18:05:21.891360   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring networks are active...
	I1105 18:05:21.891380   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:21.892107   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring network default is active
	I1105 18:05:21.892388   27131 main.go:141] libmachine: (ha-844661-m03) Ensuring network mk-ha-844661 is active
	I1105 18:05:21.892764   27131 main.go:141] libmachine: (ha-844661-m03) Getting domain xml...
	I1105 18:05:21.893494   27131 main.go:141] libmachine: (ha-844661-m03) Creating domain...
	I1105 18:05:23.118308   27131 main.go:141] libmachine: (ha-844661-m03) Waiting to get IP...
	I1105 18:05:23.119070   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.119438   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.119465   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.119424   27902 retry.go:31] will retry after 298.334175ms: waiting for machine to come up
	I1105 18:05:23.419032   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.419605   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.419622   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.419554   27902 retry.go:31] will retry after 273.113851ms: waiting for machine to come up
	I1105 18:05:23.693944   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:23.694349   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:23.694376   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:23.694317   27902 retry.go:31] will retry after 416.726009ms: waiting for machine to come up
	I1105 18:05:24.112851   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:24.113218   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:24.113249   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:24.113181   27902 retry.go:31] will retry after 551.953216ms: waiting for machine to come up
	I1105 18:05:24.666824   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:24.667304   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:24.667333   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:24.667249   27902 retry.go:31] will retry after 466.975145ms: waiting for machine to come up
	I1105 18:05:25.135836   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:25.136271   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:25.136292   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:25.136228   27902 retry.go:31] will retry after 589.586585ms: waiting for machine to come up
	I1105 18:05:25.726897   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:25.727480   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:25.727508   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:25.727434   27902 retry.go:31] will retry after 968.18251ms: waiting for machine to come up
	I1105 18:05:26.697257   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:26.697626   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:26.697652   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:26.697586   27902 retry.go:31] will retry after 1.127611463s: waiting for machine to come up
	I1105 18:05:27.826904   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:27.827312   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:27.827340   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:27.827258   27902 retry.go:31] will retry after 1.342205842s: waiting for machine to come up
	I1105 18:05:29.171618   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:29.172079   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:29.172146   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:29.172073   27902 retry.go:31] will retry after 1.974625708s: waiting for machine to come up
	I1105 18:05:31.148071   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:31.148482   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:31.148499   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:31.148434   27902 retry.go:31] will retry after 2.71055754s: waiting for machine to come up
	I1105 18:05:33.861975   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:33.862458   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:33.862483   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:33.862417   27902 retry.go:31] will retry after 3.509037885s: waiting for machine to come up
	I1105 18:05:37.373198   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:37.373748   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find current IP address of domain ha-844661-m03 in network mk-ha-844661
	I1105 18:05:37.373778   27131 main.go:141] libmachine: (ha-844661-m03) DBG | I1105 18:05:37.373690   27902 retry.go:31] will retry after 4.502442692s: waiting for machine to come up
	I1105 18:05:41.878135   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.878636   27131 main.go:141] libmachine: (ha-844661-m03) Found IP for machine: 192.168.39.52
	I1105 18:05:41.878665   27131 main.go:141] libmachine: (ha-844661-m03) Reserving static IP address...
	I1105 18:05:41.878678   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has current primary IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.879129   27131 main.go:141] libmachine: (ha-844661-m03) DBG | unable to find host DHCP lease matching {name: "ha-844661-m03", mac: "52:54:00:62:70:0e", ip: "192.168.39.52"} in network mk-ha-844661
	I1105 18:05:41.955281   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Getting to WaitForSSH function...
	I1105 18:05:41.955317   27131 main.go:141] libmachine: (ha-844661-m03) Reserved static IP address: 192.168.39.52
	I1105 18:05:41.955331   27131 main.go:141] libmachine: (ha-844661-m03) Waiting for SSH to be available...
	I1105 18:05:41.957358   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.957752   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:41.957781   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:41.957992   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using SSH client type: external
	I1105 18:05:41.958020   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa (-rw-------)
	I1105 18:05:41.958098   27131 main.go:141] libmachine: (ha-844661-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:05:41.958121   27131 main.go:141] libmachine: (ha-844661-m03) DBG | About to run SSH command:
	I1105 18:05:41.958159   27131 main.go:141] libmachine: (ha-844661-m03) DBG | exit 0
	I1105 18:05:42.086743   27131 main.go:141] libmachine: (ha-844661-m03) DBG | SSH cmd err, output: <nil>: 
	I1105 18:05:42.087041   27131 main.go:141] libmachine: (ha-844661-m03) KVM machine creation complete!
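Before SSH came up, the driver waited for the new VM's DHCP lease with progressively longer retry delays (starting around 300ms and growing toward several seconds above). A generic sketch of that wait pattern; lookupIP stands in for the libvirt lease query and is hypothetical.

    package example

    import (
        "errors"
        "time"
    )

    // waitForIP retries lookupIP with a growing delay until an address appears or the timeout expires.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // back off gradually, similar to the retry intervals in the log
            }
        }
        return "", errors.New("machine did not obtain an IP address in time")
    }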
	I1105 18:05:42.087332   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:42.087854   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:42.088045   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:42.088232   27131 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:05:42.088247   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetState
	I1105 18:05:42.089254   27131 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:05:42.089266   27131 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:05:42.089278   27131 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:05:42.089283   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.091449   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.091761   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.091789   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.091901   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.092048   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.092179   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.092313   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.092495   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.092748   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.092763   27131 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:05:42.206064   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:05:42.206086   27131 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:05:42.206094   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.208351   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.208732   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.208750   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.208928   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.209072   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.209271   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.209444   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.209598   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.209769   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.209780   27131 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:05:42.323709   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:05:42.323865   27131 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:05:42.323878   27131 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:05:42.323888   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.324267   27131 buildroot.go:166] provisioning hostname "ha-844661-m03"
	I1105 18:05:42.324297   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.324481   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.327505   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.327833   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.327862   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.328041   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.328248   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.328422   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.328544   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.328776   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.329027   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.329041   27131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661-m03 && echo "ha-844661-m03" | sudo tee /etc/hostname
	I1105 18:05:42.457338   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661-m03
	
	I1105 18:05:42.457368   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.460928   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.461292   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.461321   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.461510   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.461681   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.461835   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.461969   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.462135   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:42.462324   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:42.462348   27131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:05:42.583532   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:05:42.583564   27131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:05:42.583578   27131 buildroot.go:174] setting up certificates
	I1105 18:05:42.583593   27131 provision.go:84] configureAuth start
	I1105 18:05:42.583602   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetMachineName
	I1105 18:05:42.583890   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:42.586719   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.587067   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.587099   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.587290   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.589736   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.590192   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.590227   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.590360   27131 provision.go:143] copyHostCerts
	I1105 18:05:42.590408   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:05:42.590449   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:05:42.590459   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:05:42.590538   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:05:42.590622   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:05:42.590645   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:05:42.590652   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:05:42.590675   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:05:42.590726   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:05:42.590742   27131 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:05:42.590748   27131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:05:42.590768   27131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:05:42.590820   27131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661-m03 san=[127.0.0.1 192.168.39.52 ha-844661-m03 localhost minikube]
	I1105 18:05:42.925752   27131 provision.go:177] copyRemoteCerts
	I1105 18:05:42.925808   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:05:42.925833   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:42.928689   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.929066   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:42.929101   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:42.929303   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:42.929489   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:42.929666   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:42.929803   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.020278   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:05:43.020356   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:05:43.044012   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:05:43.044085   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:05:43.067535   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:05:43.067615   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:05:43.091055   27131 provision.go:87] duration metric: took 507.451446ms to configureAuth
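configureAuth above generated a server certificate for the new node with SANs covering 127.0.0.1, 192.168.39.52, ha-844661-m03, localhost and minikube, then copied it to /etc/docker on the guest. A standard-library sketch for inspecting such a certificate's SANs; the path argument is whichever server.pem you want to check.

    package example

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // printServerCertSANs decodes a PEM certificate and prints its DNS and IP SANs.
    func printServerCertSANs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs:", cert.IPAddresses)
        return nil
    }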
	I1105 18:05:43.091084   27131 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:05:43.091353   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:43.091482   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.094765   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.095169   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.095193   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.095384   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.095574   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.095740   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.095896   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.096067   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:43.096263   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:43.096284   27131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:05:43.325666   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:05:43.325693   27131 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:05:43.325711   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetURL
	I1105 18:05:43.326946   27131 main.go:141] libmachine: (ha-844661-m03) DBG | Using libvirt version 6000000
	I1105 18:05:43.329691   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.330121   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.330146   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.330327   27131 main.go:141] libmachine: Docker is up and running!
	I1105 18:05:43.330347   27131 main.go:141] libmachine: Reticulating splines...
	I1105 18:05:43.330356   27131 client.go:171] duration metric: took 21.769352405s to LocalClient.Create
	I1105 18:05:43.330393   27131 start.go:167] duration metric: took 21.769425686s to libmachine.API.Create "ha-844661"
	I1105 18:05:43.330407   27131 start.go:293] postStartSetup for "ha-844661-m03" (driver="kvm2")
	I1105 18:05:43.330422   27131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:05:43.330439   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.330671   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:05:43.330693   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.332887   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.333189   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.333218   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.333427   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.333597   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.333764   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.333891   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.421747   27131 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:05:43.425946   27131 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:05:43.425980   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:05:43.426048   27131 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:05:43.426118   27131 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:05:43.426127   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:05:43.426241   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:05:43.436295   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:05:43.461822   27131 start.go:296] duration metric: took 131.400624ms for postStartSetup
	I1105 18:05:43.461911   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetConfigRaw
	I1105 18:05:43.462559   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:43.465039   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.465395   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.465419   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.465660   27131 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:05:43.465861   27131 start.go:128] duration metric: took 21.923641121s to createHost
	I1105 18:05:43.465891   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.468236   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.468751   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.468776   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.468993   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.469148   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.469288   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.469410   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.469542   27131 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:43.469719   27131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I1105 18:05:43.469729   27131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:05:43.583301   27131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829943.559053309
	
	I1105 18:05:43.583330   27131 fix.go:216] guest clock: 1730829943.559053309
	I1105 18:05:43.583338   27131 fix.go:229] Guest: 2024-11-05 18:05:43.559053309 +0000 UTC Remote: 2024-11-05 18:05:43.465876826 +0000 UTC m=+142.850569806 (delta=93.176483ms)
	I1105 18:05:43.583357   27131 fix.go:200] guest clock delta is within tolerance: 93.176483ms
	I1105 18:05:43.583365   27131 start.go:83] releasing machines lock for "ha-844661-m03", held for 22.041249603s
	I1105 18:05:43.583392   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.583670   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:43.586387   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.586835   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.586865   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.589174   27131 out.go:177] * Found network options:
	I1105 18:05:43.590513   27131 out.go:177]   - NO_PROXY=192.168.39.48,192.168.39.38
	W1105 18:05:43.591696   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:05:43.591728   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:05:43.591742   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592264   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592439   27131 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:05:43.592540   27131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:05:43.592583   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	W1105 18:05:43.592659   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:05:43.592686   27131 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:05:43.592773   27131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:05:43.592798   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:05:43.595358   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595711   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.595743   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595763   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.595936   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.596109   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.596235   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:43.596238   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.596260   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:43.596402   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:05:43.596401   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.596521   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:05:43.596667   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:05:43.596795   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:05:43.836071   27131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:05:43.841664   27131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:05:43.841742   27131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:05:43.858022   27131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:05:43.858050   27131 start.go:495] detecting cgroup driver to use...
	I1105 18:05:43.858129   27131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:05:43.874613   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:05:43.888461   27131 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:05:43.888526   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:05:43.901586   27131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:05:43.914516   27131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:05:44.022716   27131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:05:44.162802   27131 docker.go:233] disabling docker service ...
	I1105 18:05:44.162867   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:05:44.178520   27131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:05:44.190518   27131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:05:44.307326   27131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:05:44.440411   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:05:44.453238   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:05:44.471519   27131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:05:44.471573   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.481424   27131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:05:44.481492   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.491154   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.500794   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.511947   27131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:05:44.521660   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.531075   27131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:05:44.547126   27131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
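A minimal sketch of how to confirm the result of the sed edits above (illustrative, not taken from the test run; assumes the drop-in path shown in those commands):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # Expected, per the commands above:
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]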
	I1105 18:05:44.557037   27131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:05:44.565707   27131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:05:44.565772   27131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:05:44.580225   27131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:05:44.590720   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:05:44.720733   27131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:05:44.813635   27131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:05:44.813712   27131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:05:44.818398   27131 start.go:563] Will wait 60s for crictl version
	I1105 18:05:44.818453   27131 ssh_runner.go:195] Run: which crictl
	I1105 18:05:44.821924   27131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:05:44.862340   27131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:05:44.862414   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:05:44.888088   27131 ssh_runner.go:195] Run: crio --version
	I1105 18:05:44.915450   27131 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:05:44.916959   27131 out.go:177]   - env NO_PROXY=192.168.39.48
	I1105 18:05:44.918290   27131 out.go:177]   - env NO_PROXY=192.168.39.48,192.168.39.38
	I1105 18:05:44.919504   27131 main.go:141] libmachine: (ha-844661-m03) Calling .GetIP
	I1105 18:05:44.921870   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:44.922342   27131 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:05:44.922369   27131 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:05:44.922579   27131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:05:44.926550   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:05:44.938321   27131 mustload.go:65] Loading cluster: ha-844661
	I1105 18:05:44.938602   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:44.939019   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:44.939070   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:44.954536   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
	I1105 18:05:44.955060   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:44.955556   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:44.955581   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:44.955872   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:44.956050   27131 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:05:44.957611   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:05:44.957920   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:44.957971   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:44.973679   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33387
	I1105 18:05:44.974166   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:44.974646   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:44.974660   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:44.974951   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:44.975198   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:05:44.975390   27131 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.52
	I1105 18:05:44.975402   27131 certs.go:194] generating shared ca certs ...
	I1105 18:05:44.975424   27131 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:44.975543   27131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:05:44.975579   27131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:05:44.975587   27131 certs.go:256] generating profile certs ...
	I1105 18:05:44.975659   27131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:05:44.975685   27131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b
	I1105 18:05:44.975700   27131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.52 192.168.39.254]
	I1105 18:05:45.201266   27131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b ...
	I1105 18:05:45.201297   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b: {Name:mk528e0260fc30831e80a622836a2ff38ea38838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:45.201463   27131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b ...
	I1105 18:05:45.201476   27131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b: {Name:mkf6f5a9f3c5c5cd5e56be42a7f99d1f16c92ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:05:45.201544   27131 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.c62dff9b -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:05:45.201685   27131 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.c62dff9b -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:05:45.201845   27131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:05:45.201861   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:05:45.201877   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:05:45.201896   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:05:45.201914   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:05:45.201928   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:05:45.201942   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:05:45.201954   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:05:45.215059   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:05:45.215144   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:05:45.215186   27131 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:05:45.215194   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:05:45.215215   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:05:45.215240   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:05:45.215272   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:05:45.215314   27131 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:05:45.215350   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.215374   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.215398   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.215435   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:05:45.218425   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:45.218874   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:05:45.218901   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:45.219093   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:05:45.219284   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:05:45.219433   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:05:45.219544   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:05:45.291312   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:05:45.296113   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:05:45.309256   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:05:45.313268   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1105 18:05:45.324891   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:05:45.328601   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:05:45.339115   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:05:45.343326   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:05:45.353973   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:05:45.357652   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:05:45.367881   27131 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:05:45.371920   27131 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 18:05:45.381431   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:05:45.405521   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:05:45.428099   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:05:45.450896   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:05:45.472444   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1105 18:05:45.494567   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:05:45.518941   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:05:45.542679   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:05:45.565272   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:05:45.586847   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:05:45.609171   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:05:45.631071   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:05:45.647046   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1105 18:05:45.662643   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:05:45.677589   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:05:45.693263   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:05:45.708513   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 18:05:45.723904   27131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:05:45.739595   27131 ssh_runner.go:195] Run: openssl version
	I1105 18:05:45.744988   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:05:45.754754   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.759038   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.759097   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:05:45.764843   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:05:45.774526   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:05:45.784026   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.788019   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.788066   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:05:45.793328   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:05:45.803282   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:05:45.813203   27131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.817364   27131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.817407   27131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:05:45.822692   27131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:05:45.832731   27131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:05:45.836652   27131 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:05:45.836705   27131 kubeadm.go:934] updating node {m03 192.168.39.52 8443 v1.31.2 crio true true} ...
	I1105 18:05:45.836816   27131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:05:45.836851   27131 kube-vip.go:115] generating kube-vip config ...
	I1105 18:05:45.836896   27131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:05:45.851973   27131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:05:45.852033   27131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
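Illustrative check, not part of the log (values taken from the kube-vip config above: VIP 192.168.39.254/32 on eth0, port 8443): once kube-vip elects a leader among the control-plane nodes, the VIP should be bound on that node and the API should answer through it.

  ip addr show eth0 | grep 192.168.39.254      # VIP appears as a /32 on the current leader only
  curl -k https://192.168.39.254:8443/livez    # any HTTP reply (200 or 401) shows the VIP routes to an apiserver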
	I1105 18:05:45.852095   27131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:05:45.861565   27131 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 18:05:45.861624   27131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 18:05:45.871179   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1105 18:05:45.871192   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 18:05:45.871218   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:05:45.871230   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:05:45.871246   27131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1105 18:05:45.871262   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:05:45.871285   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 18:05:45.871314   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 18:05:45.885118   27131 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:05:45.885168   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 18:05:45.885198   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 18:05:45.885198   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 18:05:45.885201   27131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 18:05:45.885224   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 18:05:45.895722   27131 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 18:05:45.895762   27131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
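The kubelet/kubeadm/kubectl copies above come from binaries that were fetched against the upstream .sha256 files referenced earlier in this log (binary.go:74); the same integrity check can be reproduced by hand (illustrative, using the kubelet URL from the log):

  curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
  curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # prints "kubelet: OK" on a good download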
	I1105 18:05:46.776289   27131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:05:46.785676   27131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1105 18:05:46.804664   27131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:05:46.823256   27131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:05:46.839659   27131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:05:46.843739   27131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:05:46.855127   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:05:46.984151   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:05:47.002930   27131 host.go:66] Checking if "ha-844661" exists ...
	I1105 18:05:47.003372   27131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:05:47.003427   27131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:05:47.019365   27131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I1105 18:05:47.020121   27131 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:05:47.020574   27131 main.go:141] libmachine: Using API Version  1
	I1105 18:05:47.020595   27131 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:05:47.020908   27131 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:05:47.021095   27131 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:05:47.021355   27131 start.go:317] joinCluster: &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:05:47.021508   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 18:05:47.021529   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:05:47.024802   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:47.025266   27131 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:05:47.025301   27131 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:05:47.025485   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:05:47.025649   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:05:47.025818   27131 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:05:47.025989   27131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:05:47.187808   27131 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:05:47.187862   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ywlsrk.n1qe1uf11bwul667 --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03 --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443"
	I1105 18:06:08.756523   27131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ywlsrk.n1qe1uf11bwul667 --discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844661-m03 --control-plane --apiserver-advertise-address=192.168.39.52 --apiserver-bind-port=8443": (21.568638959s)
	I1105 18:06:08.756554   27131 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 18:06:09.321152   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844661-m03 minikube.k8s.io/updated_at=2024_11_05T18_06_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-844661 minikube.k8s.io/primary=false
	I1105 18:06:09.429932   27131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844661-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 18:06:09.553648   27131 start.go:319] duration metric: took 22.532294884s to joinCluster
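Illustrative follow-up (not from the test run) to the two kubectl invocations above: the minikube.k8s.io labels and the removed control-plane taint can be confirmed directly on the joined node.

  kubectl get node ha-844661-m03 --show-labels | grep minikube.k8s.io/name=ha-844661
  kubectl describe node ha-844661-m03 | grep -i taints   # the control-plane NoSchedule taint should be gone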
	I1105 18:06:09.553756   27131 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:06:09.554141   27131 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:09.555396   27131 out.go:177] * Verifying Kubernetes components...
	I1105 18:06:09.556678   27131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:09.771512   27131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:06:09.788145   27131 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:06:09.788384   27131 kapi.go:59] client config for ha-844661: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:06:09.788445   27131 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.48:8443
	I1105 18:06:09.788700   27131 node_ready.go:35] waiting up to 6m0s for node "ha-844661-m03" to be "Ready" ...
	I1105 18:06:09.788799   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:09.788806   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:09.788814   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:09.788817   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:09.792219   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:10.289451   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:10.289477   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:10.289489   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:10.289494   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:10.292860   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:10.789577   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:10.789602   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:10.789615   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:10.789623   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:10.793572   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.289465   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:11.289484   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:11.289492   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:11.289498   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:11.292734   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.789023   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:11.789052   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:11.789064   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:11.789070   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:11.792248   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:11.792884   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:12.289577   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:12.289596   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:12.289604   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:12.289609   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:12.292931   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:12.789594   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:12.789615   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:12.789623   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:12.789628   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:12.793282   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.288880   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:13.288900   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:13.288909   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:13.288912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:13.292354   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.789203   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:13.789228   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:13.789240   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:13.789244   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:13.792591   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:13.793128   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:14.289574   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:14.289596   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:14.289605   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:14.289610   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:14.292856   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:14.789821   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:14.789847   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:14.789858   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:14.789863   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:14.793134   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.289398   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:15.289420   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:15.289428   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:15.289433   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:15.292967   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.789567   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:15.789591   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:15.789602   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:15.789607   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:15.793208   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:15.793657   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:16.289022   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:16.289046   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:16.289056   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.289062   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:16.309335   27131 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1105 18:06:16.789461   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:16.789479   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:16.789488   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.789492   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:16.793000   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:17.289308   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:17.289333   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:17.289345   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:17.289354   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:17.292729   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:17.789752   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:17.789779   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:17.789791   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:17.789798   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:17.794196   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:17.794657   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:18.288931   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:18.288964   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:18.288972   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:18.288976   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:18.292090   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:18.789058   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:18.789080   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:18.789086   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:18.789090   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:18.792559   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:19.289923   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:19.289950   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:19.289961   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:19.289966   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:19.293279   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:19.789125   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:19.789153   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:19.789164   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:19.789170   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:19.792732   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:20.289126   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:20.289149   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:20.289157   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:20.289162   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:20.292641   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:20.293309   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:20.789527   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:20.789549   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:20.789557   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:20.789561   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:20.792849   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:21.289833   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:21.289856   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:21.289863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:21.289867   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:21.293665   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:21.789877   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:21.789900   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:21.789908   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:21.789912   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:21.793341   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:22.289645   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:22.289664   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:22.289672   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:22.289676   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:22.292986   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:22.293503   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:22.789122   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:22.789148   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:22.789160   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:22.789164   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:22.792397   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:23.289550   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:23.289574   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:23.289584   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:23.289591   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:23.293009   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:23.789081   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:23.789104   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:23.789112   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:23.789116   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:23.792559   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:24.289408   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:24.289432   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:24.289444   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:24.289448   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:24.293655   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:24.294170   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:24.789552   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:24.789579   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:24.789592   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:24.789598   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:24.792779   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:25.289364   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:25.289386   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:25.289393   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:25.289398   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:25.293189   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:25.789622   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:25.789644   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:25.789652   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:25.789655   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:25.792920   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.288919   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:26.288944   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:26.288954   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:26.288961   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:26.292248   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.789720   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:26.789741   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:26.789749   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:26.789753   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:26.793339   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:26.793840   27131 node_ready.go:53] node "ha-844661-m03" has status "Ready":"False"
	I1105 18:06:27.289627   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:27.289653   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:27.289664   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:27.289671   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:27.292896   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:27.789396   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:27.789418   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:27.789426   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:27.789430   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:27.793104   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.288926   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.288950   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.288958   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.288962   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.292349   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.292934   27131 node_ready.go:49] node "ha-844661-m03" has status "Ready":"True"
	I1105 18:06:28.292959   27131 node_ready.go:38] duration metric: took 18.504244816s for node "ha-844661-m03" to be "Ready" ...
	I1105 18:06:28.292967   27131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:06:28.293052   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:28.293062   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.293069   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.293073   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.298865   27131 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:06:28.305101   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.305172   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4bdfz
	I1105 18:06:28.305180   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.305187   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.305191   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.308014   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.308823   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.308838   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.308845   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.308848   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.311202   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.311752   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.311769   27131 pod_ready.go:82] duration metric: took 6.646273ms for pod "coredns-7c65d6cfc9-4bdfz" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.311778   27131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.311825   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5g97
	I1105 18:06:28.311833   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.311839   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.311842   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.314162   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.315006   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.315022   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.315032   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.315037   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.317112   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.317771   27131 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.317790   27131 pod_ready.go:82] duration metric: took 6.006174ms for pod "coredns-7c65d6cfc9-s5g97" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.317799   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.317847   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661
	I1105 18:06:28.317855   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.317861   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.317869   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.320184   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.320779   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:28.320794   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.320801   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.320804   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.323022   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.323542   27131 pod_ready.go:93] pod "etcd-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.323560   27131 pod_ready.go:82] duration metric: took 5.754386ms for pod "etcd-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.323568   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.323613   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m02
	I1105 18:06:28.323621   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.323627   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.323631   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.325924   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.326482   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:28.326496   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.326503   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.326510   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.328928   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:28.329392   27131 pod_ready.go:93] pod "etcd-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.329412   27131 pod_ready.go:82] duration metric: took 5.837481ms for pod "etcd-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.329426   27131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.489824   27131 request.go:632] Waited for 160.309715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m03
	I1105 18:06:28.489893   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844661-m03
	I1105 18:06:28.489899   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.489906   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.489914   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.493239   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.689345   27131 request.go:632] Waited for 195.357359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.689416   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:28.689422   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.689430   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.689436   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.692948   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:28.693449   27131 pod_ready.go:93] pod "etcd-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:28.693468   27131 pod_ready.go:82] duration metric: took 364.031884ms for pod "etcd-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.693488   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:28.889759   27131 request.go:632] Waited for 196.181442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:06:28.889818   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661
	I1105 18:06:28.889823   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:28.889830   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:28.889836   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:28.893294   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.089232   27131 request.go:632] Waited for 195.272157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:29.089332   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:29.089345   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.089355   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.089363   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.092371   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:29.093062   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.093081   27131 pod_ready.go:82] duration metric: took 399.581249ms for pod "kube-apiserver-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.093095   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.289039   27131 request.go:632] Waited for 195.870378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:06:29.289108   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m02
	I1105 18:06:29.289114   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.289121   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.289127   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.292782   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.489337   27131 request.go:632] Waited for 195.348089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:29.489423   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:29.489428   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.489439   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.489446   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.492721   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.493290   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.493309   27131 pod_ready.go:82] duration metric: took 400.203815ms for pod "kube-apiserver-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.493320   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.689371   27131 request.go:632] Waited for 195.98498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m03
	I1105 18:06:29.689467   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844661-m03
	I1105 18:06:29.689479   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.689489   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.689497   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.692955   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:29.888986   27131 request.go:632] Waited for 195.295088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:29.889053   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:29.889060   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:29.889071   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.889080   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:29.892048   27131 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:29.892533   27131 pod_ready.go:93] pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:29.892549   27131 pod_ready.go:82] duration metric: took 399.221552ms for pod "kube-apiserver-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:29.892559   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.089669   27131 request.go:632] Waited for 197.039051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:06:30.089731   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661
	I1105 18:06:30.089736   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.089745   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.089749   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.093164   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.289306   27131 request.go:632] Waited for 195.324188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:30.289372   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:30.289384   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.289397   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.289407   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.292636   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.293206   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:30.293227   27131 pod_ready.go:82] duration metric: took 400.66121ms for pod "kube-controller-manager-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.293238   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.489536   27131 request.go:632] Waited for 196.217205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:06:30.489646   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m02
	I1105 18:06:30.489658   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.489668   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.489675   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.493045   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.688919   27131 request.go:632] Waited for 195.135908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:30.688971   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:30.688976   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.688984   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.688988   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.692203   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:30.692968   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:30.692987   27131 pod_ready.go:82] duration metric: took 399.741193ms for pod "kube-controller-manager-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.693001   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:30.889370   27131 request.go:632] Waited for 196.304824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m03
	I1105 18:06:30.889450   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844661-m03
	I1105 18:06:30.889457   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:30.889465   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:30.889472   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:30.892647   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.089803   27131 request.go:632] Waited for 196.376037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.089851   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.089855   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.089863   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.089869   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.093035   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.093548   27131 pod_ready.go:93] pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.093568   27131 pod_ready.go:82] duration metric: took 400.558908ms for pod "kube-controller-manager-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.093580   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mk9m" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.289696   27131 request.go:632] Waited for 196.055175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mk9m
	I1105 18:06:31.289756   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mk9m
	I1105 18:06:31.289761   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.289768   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.289772   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.293304   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.489478   27131 request.go:632] Waited for 195.351968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.489541   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:31.489549   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.489556   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.489562   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.492991   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.493563   27131 pod_ready.go:93] pod "kube-proxy-2mk9m" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.493582   27131 pod_ready.go:82] duration metric: took 399.995731ms for pod "kube-proxy-2mk9m" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.493592   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.689978   27131 request.go:632] Waited for 196.300604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:06:31.690038   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjpkh
	I1105 18:06:31.690043   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.690050   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.690053   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.693380   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.889851   27131 request.go:632] Waited for 195.375559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:31.889905   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:31.889910   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:31.889917   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:31.889922   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:31.893474   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:31.894113   27131 pod_ready.go:93] pod "kube-proxy-pjpkh" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:31.894132   27131 pod_ready.go:82] duration metric: took 400.533639ms for pod "kube-proxy-pjpkh" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:31.894142   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.089665   27131 request.go:632] Waited for 195.450073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:06:32.089735   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zsbfs
	I1105 18:06:32.089740   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.089747   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.089751   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.093190   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.289235   27131 request.go:632] Waited for 195.339549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:32.289293   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:32.289310   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.289317   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.289321   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.292485   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.293147   27131 pod_ready.go:93] pod "kube-proxy-zsbfs" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:32.293172   27131 pod_ready.go:82] duration metric: took 399.02399ms for pod "kube-proxy-zsbfs" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.293182   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.489243   27131 request.go:632] Waited for 195.995375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:06:32.489308   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661
	I1105 18:06:32.489316   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.489324   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.489327   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.493003   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.689901   27131 request.go:632] Waited for 196.356448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:32.689953   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661
	I1105 18:06:32.689958   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.689966   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.689970   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.693190   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:32.693742   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:32.693763   27131 pod_ready.go:82] duration metric: took 400.573652ms for pod "kube-scheduler-ha-844661" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.693777   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:32.889556   27131 request.go:632] Waited for 195.689425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:06:32.889607   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m02
	I1105 18:06:32.889612   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:32.889620   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:32.889624   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:32.893476   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.089475   27131 request.go:632] Waited for 195.357977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:33.089527   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m02
	I1105 18:06:33.089532   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.089539   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.089543   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.092888   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.093460   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:33.093481   27131 pod_ready.go:82] duration metric: took 399.697128ms for pod "kube-scheduler-ha-844661-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.093491   27131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.289500   27131 request.go:632] Waited for 195.942997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m03
	I1105 18:06:33.289569   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844661-m03
	I1105 18:06:33.289576   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.289585   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.289589   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.293636   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:33.489851   27131 request.go:632] Waited for 195.367744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:33.489908   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes/ha-844661-m03
	I1105 18:06:33.489913   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.489920   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.489924   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.493512   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:33.494235   27131 pod_ready.go:93] pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:33.494258   27131 pod_ready.go:82] duration metric: took 400.759685ms for pod "kube-scheduler-ha-844661-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:33.494276   27131 pod_ready.go:39] duration metric: took 5.201298893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:06:33.494295   27131 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:06:33.494356   27131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:06:33.509380   27131 api_server.go:72] duration metric: took 23.955584698s to wait for apiserver process to appear ...
	I1105 18:06:33.509409   27131 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:06:33.509433   27131 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1105 18:06:33.514022   27131 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1105 18:06:33.514097   27131 round_trippers.go:463] GET https://192.168.39.48:8443/version
	I1105 18:06:33.514107   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.514114   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.514119   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.514958   27131 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 18:06:33.515041   27131 api_server.go:141] control plane version: v1.31.2
	I1105 18:06:33.515056   27131 api_server.go:131] duration metric: took 5.640397ms to wait for apiserver health ...
	I1105 18:06:33.515062   27131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:06:33.689459   27131 request.go:632] Waited for 174.322152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:33.689543   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:33.689554   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.689564   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.689570   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.695696   27131 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:06:33.701785   27131 system_pods.go:59] 24 kube-system pods found
	I1105 18:06:33.701817   27131 system_pods.go:61] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:06:33.701822   27131 system_pods.go:61] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:06:33.701826   27131 system_pods.go:61] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:06:33.701829   27131 system_pods.go:61] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:06:33.701832   27131 system_pods.go:61] "etcd-ha-844661-m03" [c8179289-e67f-4a2b-bba3-1387aa107d3e] Running
	I1105 18:06:33.701836   27131 system_pods.go:61] "kindnet-fzrh6" [985ef0b3-91cc-4965-a1f3-a8e468eba2ee] Running
	I1105 18:06:33.701839   27131 system_pods.go:61] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:06:33.701842   27131 system_pods.go:61] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:06:33.701845   27131 system_pods.go:61] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:06:33.701849   27131 system_pods.go:61] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:06:33.701852   27131 system_pods.go:61] "kube-apiserver-ha-844661-m03" [57a94b5d-466e-4d87-ba16-ceba58d65ee0] Running
	I1105 18:06:33.701858   27131 system_pods.go:61] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:06:33.701864   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:06:33.701868   27131 system_pods.go:61] "kube-controller-manager-ha-844661-m03" [dcadcdf5-6004-49a9-800b-f27798ab06db] Running
	I1105 18:06:33.701872   27131 system_pods.go:61] "kube-proxy-2mk9m" [483f248e-9776-4c11-a088-a2cbd152602b] Running
	I1105 18:06:33.701875   27131 system_pods.go:61] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:06:33.701879   27131 system_pods.go:61] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:06:33.701882   27131 system_pods.go:61] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:06:33.701886   27131 system_pods.go:61] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:06:33.701889   27131 system_pods.go:61] "kube-scheduler-ha-844661-m03" [711f353f-ee82-4066-98ff-e3349082bf32] Running
	I1105 18:06:33.701894   27131 system_pods.go:61] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:06:33.701897   27131 system_pods.go:61] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:06:33.701900   27131 system_pods.go:61] "kube-vip-ha-844661-m03" [5ebe3d8b-e1e2-4d10-bf5c-d88148144dd1] Running
	I1105 18:06:33.701903   27131 system_pods.go:61] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:06:33.701909   27131 system_pods.go:74] duration metric: took 186.841773ms to wait for pod list to return data ...
	I1105 18:06:33.701919   27131 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:06:33.889363   27131 request.go:632] Waited for 187.358199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:06:33.889435   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:06:33.889442   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:33.889452   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:33.889459   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:33.893683   27131 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:33.893791   27131 default_sa.go:45] found service account: "default"
	I1105 18:06:33.893804   27131 default_sa.go:55] duration metric: took 191.879725ms for default service account to be created ...
	I1105 18:06:33.893811   27131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:06:34.089215   27131 request.go:632] Waited for 195.345636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:34.089283   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:34.089291   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:34.089303   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:34.089323   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:34.096363   27131 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:06:34.102465   27131 system_pods.go:86] 24 kube-system pods found
	I1105 18:06:34.102491   27131 system_pods.go:89] "coredns-7c65d6cfc9-4bdfz" [f72d7ed7-deda-4f95-b570-9335e8a6c2e3] Running
	I1105 18:06:34.102496   27131 system_pods.go:89] "coredns-7c65d6cfc9-s5g97" [494c4170-6c68-4f3d-be3a-219a9c215ca3] Running
	I1105 18:06:34.102501   27131 system_pods.go:89] "etcd-ha-844661" [27fa9727-8088-4b04-be37-018051089dfa] Running
	I1105 18:06:34.102505   27131 system_pods.go:89] "etcd-ha-844661-m02" [a41010e2-9b13-49ab-9f7f-288590dadcbe] Running
	I1105 18:06:34.102508   27131 system_pods.go:89] "etcd-ha-844661-m03" [c8179289-e67f-4a2b-bba3-1387aa107d3e] Running
	I1105 18:06:34.102512   27131 system_pods.go:89] "kindnet-fzrh6" [985ef0b3-91cc-4965-a1f3-a8e468eba2ee] Running
	I1105 18:06:34.102515   27131 system_pods.go:89] "kindnet-q898d" [c29b6703-c461-40c7-b246-2e65cc84f893] Running
	I1105 18:06:34.102519   27131 system_pods.go:89] "kindnet-vz22j" [cd4af99a-51ed-45ff-afa2-477f95f92f94] Running
	I1105 18:06:34.102522   27131 system_pods.go:89] "kube-apiserver-ha-844661" [cbe51bd3-30fa-4afa-8605-fb40a13a6cc6] Running
	I1105 18:06:34.102525   27131 system_pods.go:89] "kube-apiserver-ha-844661-m02" [be91dae0-ebd5-4e25-a5a6-a560eaa01290] Running
	I1105 18:06:34.102529   27131 system_pods.go:89] "kube-apiserver-ha-844661-m03" [57a94b5d-466e-4d87-ba16-ceba58d65ee0] Running
	I1105 18:06:34.102533   27131 system_pods.go:89] "kube-controller-manager-ha-844661" [98d225ad-16a4-44a6-b1e6-7d81aa218bc6] Running
	I1105 18:06:34.102537   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m02" [fa2faae2-0519-41fb-908b-8c7f0f783d93] Running
	I1105 18:06:34.102541   27131 system_pods.go:89] "kube-controller-manager-ha-844661-m03" [dcadcdf5-6004-49a9-800b-f27798ab06db] Running
	I1105 18:06:34.102545   27131 system_pods.go:89] "kube-proxy-2mk9m" [483f248e-9776-4c11-a088-a2cbd152602b] Running
	I1105 18:06:34.102551   27131 system_pods.go:89] "kube-proxy-pjpkh" [f65172ee-171e-49d0-a948-e0dc11a45d03] Running
	I1105 18:06:34.102554   27131 system_pods.go:89] "kube-proxy-zsbfs" [3b55d473-b4e4-4c13-bc55-01ffb2a9768f] Running
	I1105 18:06:34.102557   27131 system_pods.go:89] "kube-scheduler-ha-844661" [b59fc044-84ea-4c1e-bc3b-e4cd242c63dd] Running
	I1105 18:06:34.102561   27131 system_pods.go:89] "kube-scheduler-ha-844661-m02" [b1aebfce-52b2-408c-a3eb-87a2ad9f23d1] Running
	I1105 18:06:34.102564   27131 system_pods.go:89] "kube-scheduler-ha-844661-m03" [711f353f-ee82-4066-98ff-e3349082bf32] Running
	I1105 18:06:34.102569   27131 system_pods.go:89] "kube-vip-ha-844661" [d5079076-90b0-4d82-b069-ee46de2b92a8] Running
	I1105 18:06:34.102573   27131 system_pods.go:89] "kube-vip-ha-844661-m02" [030d7a08-f2d6-4831-aa2a-e5eadf1dd1ae] Running
	I1105 18:06:34.102578   27131 system_pods.go:89] "kube-vip-ha-844661-m03" [5ebe3d8b-e1e2-4d10-bf5c-d88148144dd1] Running
	I1105 18:06:34.102581   27131 system_pods.go:89] "storage-provisioner" [1195203c-6407-4299-8a5e-afb85f4ba83e] Running
	I1105 18:06:34.102586   27131 system_pods.go:126] duration metric: took 208.77013ms to wait for k8s-apps to be running ...
	I1105 18:06:34.102595   27131 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:06:34.102637   27131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:06:34.118557   27131 system_svc.go:56] duration metric: took 15.951864ms WaitForService to wait for kubelet
	I1105 18:06:34.118583   27131 kubeadm.go:582] duration metric: took 24.564791625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:06:34.118612   27131 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:06:34.288972   27131 request.go:632] Waited for 170.274451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I1105 18:06:34.289022   27131 round_trippers.go:463] GET https://192.168.39.48:8443/api/v1/nodes
	I1105 18:06:34.289035   27131 round_trippers.go:469] Request Headers:
	I1105 18:06:34.289055   27131 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:34.289062   27131 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1105 18:06:34.292646   27131 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:34.294249   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294283   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294309   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294316   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294322   27131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:06:34.294327   27131 node_conditions.go:123] node cpu capacity is 2
	I1105 18:06:34.294335   27131 node_conditions.go:105] duration metric: took 175.714114ms to run NodePressure ...
	I1105 18:06:34.294352   27131 start.go:241] waiting for startup goroutines ...
	I1105 18:06:34.294390   27131 start.go:255] writing updated cluster config ...
	I1105 18:06:34.294711   27131 ssh_runner.go:195] Run: rm -f paused
	I1105 18:06:34.347073   27131 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 18:06:34.348891   27131 out.go:177] * Done! kubectl is now configured to use "ha-844661" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.949576155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830231949553597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be23cd6b-a300-4de3-bb1b-ab99bc038e8d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.950519346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f775852-d958-40da-a510-664c12721743 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.950565528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f775852-d958-40da-a510-664c12721743 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.950772839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f775852-d958-40da-a510-664c12721743 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.988348376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d768f9d8-2676-4778-b5a9-ea4876422627 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.988418711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d768f9d8-2676-4778-b5a9-ea4876422627 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.991585040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfde899e-b854-4239-bd99-bbbc97285633 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.994396839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830231994356407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfde899e-b854-4239-bd99-bbbc97285633 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.995874015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9d63dff-330a-4516-a8c2-37a4f85ae91b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.995945521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9d63dff-330a-4516-a8c2-37a4f85ae91b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:31 ha-844661 crio[658]: time="2024-11-05 18:10:31.997888643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9d63dff-330a-4516-a8c2-37a4f85ae91b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.044376548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf931e6b-3eee-4a48-b7a5-b9325e733cf5 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.044448722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf931e6b-3eee-4a48-b7a5-b9325e733cf5 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.045973478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5337903-f5d0-456d-b90b-571f3c755ca4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.046614700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830232046589804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5337903-f5d0-456d-b90b-571f3c755ca4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.047139668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7eec7e6e-aabb-4f55-ab8f-eaecdd1904a4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.047287190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7eec7e6e-aabb-4f55-ab8f-eaecdd1904a4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.047553945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7eec7e6e-aabb-4f55-ab8f-eaecdd1904a4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.084289544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9316768-9c46-457c-b183-f9a15ccfedd1 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.084368670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9316768-9c46-457c-b183-f9a15ccfedd1 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.085726502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebfdd74f-c88b-4c42-af0f-92adc61e7c21 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.086311017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830232086153886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebfdd74f-c88b-4c42-af0f-92adc61e7c21 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.086813799Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9372710-338c-4ce3-a277-59c9bb5eec34 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.086878340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9372710-338c-4ce3-a277-59c9bb5eec34 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:10:32 ha-844661 crio[658]: time="2024-11-05 18:10:32.087115733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f547082b18e2257ebfcb6275ad4d559f55a5ab0d85c07013a1f6b02c657f5370,PodSandboxId:27e18ae2427038fc19ab6ccd24251b2f6e675aeb0ecab9080e925dab355e1634,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730829998843146848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lzhpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8687b103-4a1a-4529-9efd-46405325fb04,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8,PodSandboxId:7b8c6b865e4b8c875c73234926d513c61ae9d0af0052ccd55c72d509b61bf057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859757081843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bdfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72d7ed7-deda-4f95-b570-9335e8a6c2e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a,PodSandboxId:44bedf8a84dbf80bd42566b6809ec9cdb3dbf7fdd00ad7e9c7257b4db84c2fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730829859701059292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-s5g97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
494c4170-6c68-4f3d-be3a-219a9c215ca3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:258fd7ae936264ab5f391c63a74bcfca43eaac9667fa55d2f1ae2bf69d86f506,PodSandboxId:b59a04159a4fb37feeb6c7207a4c2c01cfa3b88859cffc242dce30465f5090b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730829859628413447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1195203c-6407-4299-8a5e-afb85f4ba83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf,PodSandboxId:565a0867a4a3a353a69242e08c4709d453cf8c26c49ce19bd3bfae1941f65b93,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1730829847992068406,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vz22j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd4af99a-51ed-45ff-afa2-477f95f92f94,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6,PodSandboxId:a2589ca7aa1a58e73411ba6dc87f03e3b7ac2cf83e205847a09217d3f7e098cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730829843
323355886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjpkh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65172ee-171e-49d0-a948-e0dc11a45d03,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0,PodSandboxId:229c492a7d447a579e5ce9526890eba68292753e9f52bebefda4d703514de2fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:baf03d14a86fd8b76eec52962d123f536bbaa90993ae52038c69f1ef2a5a2b1a,State:CONTAINER_RUNNING,CreatedAt:173082983559
1750157,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f723db504111a23b6c2beed2ef0c2d34,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab,PodSandboxId:45ce87c5b9a86cfefee5f19ca1c813688dcb5c5657b73941bc02bee14d7d1989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730829832322076405,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c499dd53e4529ab109b4ca78f827f91,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc,PodSandboxId:da4d3442917c5cc34bb58f49ada92a08e4305f8ce6d2f2c3198593de28362d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730829832341956360,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 865a3de76fd67c015b14fb1b8eb9c1a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f,PodSandboxId:c3cdeb3fb2bc97cddc1fbff282af5d12685938172760e33f12994527fac67225,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730829832297411944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8d73a0ed0995239471e48a5b2bf8e2,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c,PodSandboxId:8cfef6eeee31d2cabf8f3fdcaaad6df80177a1af2802a97a5b1c2c468f1301ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730829832296094713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-844661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2776b91c9b4cc8368e803119651dea72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9372710-338c-4ce3-a277-59c9bb5eec34 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f547082b18e22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   27e18ae242703       busybox-7dff88458-lzhpc
	4504233c88e52       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   7b8c6b865e4b8       coredns-7c65d6cfc9-4bdfz
	2c9fc5d833b41       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   44bedf8a84dbf       coredns-7c65d6cfc9-s5g97
	258fd7ae93626       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b59a04159a4fb       storage-provisioner
	bf77486744a30       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   565a0867a4a3a       kindnet-vz22j
	1c753c07805a4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   a2589ca7aa1a5       kube-proxy-pjpkh
	9fc3970511492       ghcr.io/kube-vip/kube-vip@sha256:1a97913f74e07b54caadffc6a377c0963a796bd605c6de9be2e03ff8cb76738f     6 minutes ago       Running             kube-vip                  0                   229c492a7d447       kube-vip-ha-844661
	f06b75f1a2501       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   da4d3442917c5       etcd-ha-844661
	695ba2636aaa9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   45ce87c5b9a86       kube-scheduler-ha-844661
	d6c4df0798539       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   c3cdeb3fb2bc9       kube-apiserver-ha-844661
	9fc529f9c17c8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   8cfef6eeee31d       kube-controller-manager-ha-844661
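For reference, the listing above is the node's CRI view of its containers. A minimal sketch of how such a listing can be regenerated, assuming the crio runtime and the ha-844661 minikube profile used in this run:

    minikube ssh -p ha-844661 -- sudo crictl ps -a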
	
	
	==> coredns [2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a] <==
	[INFO] 10.244.3.2:48122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001817736s
	[INFO] 10.244.1.2:41485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154354s
	[INFO] 10.244.0.4:48696 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00417262s
	[INFO] 10.244.0.4:39724 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011241203s
	[INFO] 10.244.0.4:33801 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201157s
	[INFO] 10.244.3.2:59342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205557s
	[INFO] 10.244.3.2:38358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000335352s
	[INFO] 10.244.3.2:50220 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290051s
	[INFO] 10.244.1.2:42991 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002076706s
	[INFO] 10.244.1.2:38070 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182659s
	[INFO] 10.244.1.2:38061 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120824s
	[INFO] 10.244.0.4:55480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107684s
	[INFO] 10.244.3.2:54459 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094155s
	[INFO] 10.244.3.2:56770 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159318s
	[INFO] 10.244.1.2:46930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145588s
	[INFO] 10.244.1.2:51686 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000234893s
	[INFO] 10.244.1.2:43604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089852s
	[INFO] 10.244.0.4:59908 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00031712s
	[INFO] 10.244.3.2:36896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016445s
	[INFO] 10.244.3.2:35219 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306046s
	[INFO] 10.244.3.2:45286 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016761s
	[INFO] 10.244.1.2:48376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282486s
	[INFO] 10.244.1.2:44477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097938s
	[INFO] 10.244.1.2:51521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175252s
	[INFO] 10.244.1.2:42468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076611s
	
	
	==> coredns [4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8] <==
	[INFO] 10.244.0.4:38561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176278s
	[INFO] 10.244.0.4:47328 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000239279s
	[INFO] 10.244.0.4:37188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002005s
	[INFO] 10.244.0.4:40443 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116158s
	[INFO] 10.244.0.4:39770 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000216794s
	[INFO] 10.244.3.2:58499 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947267s
	[INFO] 10.244.3.2:50696 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001435907s
	[INFO] 10.244.3.2:53598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101366s
	[INFO] 10.244.3.2:40278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021319s
	[INFO] 10.244.3.2:35533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073855s
	[INFO] 10.244.1.2:57627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215883s
	[INFO] 10.244.1.2:58558 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015092s
	[INFO] 10.244.1.2:44310 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409552s
	[INFO] 10.244.1.2:44445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145932s
	[INFO] 10.244.1.2:53561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124269s
	[INFO] 10.244.0.4:42872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279983s
	[INFO] 10.244.0.4:56987 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127988s
	[INFO] 10.244.0.4:36230 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209676s
	[INFO] 10.244.3.2:59508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020584s
	[INFO] 10.244.3.2:54542 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160368s
	[INFO] 10.244.1.2:52317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136132s
	[INFO] 10.244.0.4:56988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179513s
	[INFO] 10.244.0.4:39632 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000244979s
	[INFO] 10.244.0.4:60960 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110854s
	[INFO] 10.244.3.2:58476 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000304046s
	
	
	==> describe nodes <==
	Name:               ha-844661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T18_03_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:03:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:02 +0000   Tue, 05 Nov 2024 18:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-844661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee44951a983a4e549dbb04cb8a2493c9
	  System UUID:                ee44951a-983a-4e54-9dbb-04cb8a2493c9
	  Boot ID:                    4c65764c-54aa-465a-bc8a-8a5365b789a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lzhpc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 coredns-7c65d6cfc9-4bdfz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 coredns-7c65d6cfc9-s5g97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 etcd-ha-844661                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-vz22j                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-apiserver-ha-844661             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-844661    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-pjpkh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ha-844661             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-844661                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m28s  kube-proxy       
	  Normal  Starting                 6m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m34s  kubelet          Node ha-844661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s  kubelet          Node ha-844661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s  kubelet          Node ha-844661 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m31s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	  Normal  NodeReady                6m13s  kubelet          Node ha-844661 status is now: NodeReady
	  Normal  RegisteredNode           5m30s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	  Normal  RegisteredNode           4m18s  node-controller  Node ha-844661 event: Registered Node ha-844661 in Controller
	
	
	Name:               ha-844661-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_04_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:06:57 +0000   Tue, 05 Nov 2024 18:08:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    ha-844661-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75eddb8895b44c028e3869c19333df27
	  System UUID:                75eddb88-95b4-4c02-8e38-69c19333df27
	  Boot ID:                    703a3f97-42af-45ac-b300-e4714fc82ae4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vkchm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-844661-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m36s
	  kube-system                 kindnet-q898d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m38s
	  kube-system                 kube-apiserver-ha-844661-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-ha-844661-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-zsbfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-scheduler-ha-844661-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-vip-ha-844661-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m33s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m38s                  cidrAllocator    Node ha-844661-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m38s (x8 over 5m38s)  kubelet          Node ha-844661-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s (x8 over 5m38s)  kubelet          Node ha-844661-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s (x7 over 5m38s)  kubelet          Node ha-844661-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-844661-m02 event: Registered Node ha-844661-m02 in Controller
	  Normal  NodeNotReady             2m3s                   node-controller  Node ha-844661-m02 status is now: NodeNotReady
	
	
	Name:               ha-844661-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_06_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:06:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:08 +0000   Tue, 05 Nov 2024 18:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    ha-844661-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eaab072d40e24724bda026ac82fdd308
	  System UUID:                eaab072d-40e2-4724-bda0-26ac82fdd308
	  Boot ID:                    db511fc0-c5d5-4348-8360-c6fc1b44808f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mwvv2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-844661-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-fzrh6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-ha-844661-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-ha-844661-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-2mk9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-ha-844661-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-vip-ha-844661-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m26s                  cidrAllocator    Node ha-844661-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-844661-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-844661-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-844661-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-844661-m03 event: Registered Node ha-844661-m03 in Controller
	
	
	Name:               ha-844661-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844661-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-844661
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_07_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844661-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:10:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:07:44 +0000   Tue, 05 Nov 2024 18:07:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-844661-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9adceb878ab74645bb56707a0ab9854e
	  System UUID:                9adceb87-8ab7-4645-bb56-707a0ab9854e
	  Boot ID:                    0b1794d4-8e9f-4a02-ba93-5010c0d8fbf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7tcjz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m19s
	  kube-system                 kube-proxy-8bw6z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m19s                  cidrAllocator    Node ha-844661-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     3m19s                  cidrAllocator    Node ha-844661-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m19s (x2 over 3m19s)  kubelet          Node ha-844661-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m19s (x2 over 3m19s)  kubelet          Node ha-844661-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m19s (x2 over 3m19s)  kubelet          Node ha-844661-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-844661-m04 event: Registered Node ha-844661-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-844661-m04 status is now: NodeReady
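The node summaries above are standard kubectl describe output for the four machines in this HA profile. A minimal sketch for reproducing them against the same cluster, assuming the kubeconfig context created for the ha-844661 profile:

    kubectl --context ha-844661 get nodes -o wide
    kubectl --context ha-844661 describe nodes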
	
	
	==> dmesg <==
	[Nov 5 18:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051370] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036705] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826003] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.830792] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.518259] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.512732] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.062769] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057746] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.181267] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.115768] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.273995] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.824232] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.167137] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.060834] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.275907] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.079756] kauditd_printk_skb: 79 callbacks suppressed
	[Nov 5 18:04] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.402917] kauditd_printk_skb: 32 callbacks suppressed
	[Nov 5 18:05] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc] <==
	{"level":"warn","ts":"2024-11-05T18:10:32.226886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.326331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.373420Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.380721Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.386462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.389930Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.391675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.400578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.403711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.406423Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.408102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.409030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.414691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.418082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.421102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.425733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.462531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.468688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.475356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.479125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.482580Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.486303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.502808Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.508770Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-11-05T18:10:32.526944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7a50af7ffd27cbe1","from":"7a50af7ffd27cbe1","remote-peer-id":"5c43d67a862496e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:10:32 up 7 min,  0 users,  load average: 0.27, 0.40, 0.21
	Linux ha-844661 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf] <==
	I1105 18:09:58.980065       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:10:08.975320       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:10:08.975425       1 main.go:301] handling current node
	I1105 18:10:08.975448       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:10:08.975457       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:10:08.975728       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:10:08.975758       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:10:08.975910       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:10:08.975933       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:10:18.980134       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:10:18.980289       1 main.go:301] handling current node
	I1105 18:10:18.980325       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:10:18.980334       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:10:18.980658       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:10:18.980687       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:10:18.980836       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:10:18.980863       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	I1105 18:10:28.984475       1 main.go:297] Handling node with IPs: map[192.168.39.48:{}]
	I1105 18:10:28.984556       1 main.go:301] handling current node
	I1105 18:10:28.984576       1 main.go:297] Handling node with IPs: map[192.168.39.38:{}]
	I1105 18:10:28.984581       1 main.go:324] Node ha-844661-m02 has CIDR [10.244.1.0/24] 
	I1105 18:10:28.984835       1 main.go:297] Handling node with IPs: map[192.168.39.52:{}]
	I1105 18:10:28.984858       1 main.go:324] Node ha-844661-m03 has CIDR [10.244.3.0/24] 
	I1105 18:10:28.985004       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1105 18:10:28.985024       1 main.go:324] Node ha-844661-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f] <==
	W1105 18:03:56.787950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.48]
	I1105 18:03:56.789794       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:03:56.795759       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:03:56.988233       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 18:03:58.574343       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 18:03:58.589042       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1105 18:03:58.611994       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 18:04:02.140726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1105 18:04:02.242563       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1105 18:06:39.847316       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39688: use of closed network connection
	E1105 18:06:40.021738       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39706: use of closed network connection
	E1105 18:06:40.204127       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39716: use of closed network connection
	E1105 18:06:40.398615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39728: use of closed network connection
	E1105 18:06:40.573865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39736: use of closed network connection
	E1105 18:06:40.752398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39760: use of closed network connection
	E1105 18:06:40.936783       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39766: use of closed network connection
	E1105 18:06:41.111519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39780: use of closed network connection
	E1105 18:06:41.286054       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39802: use of closed network connection
	E1105 18:06:41.573950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39826: use of closed network connection
	E1105 18:06:41.738524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39836: use of closed network connection
	E1105 18:06:41.904845       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39854: use of closed network connection
	E1105 18:06:42.073866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39862: use of closed network connection
	E1105 18:06:42.246567       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39868: use of closed network connection
	E1105 18:06:42.411961       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39894: use of closed network connection
	W1105 18:08:06.801135       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.48 192.168.39.52]
	
	
	==> kube-controller-manager [9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c] <==
	E1105 18:07:13.653435       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-844661-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-844661-m04"
	E1105 18:07:13.653555       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-844661-m04': failed to patch node CIDR: Node \"ha-844661-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1105 18:07:13.653638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:13.659637       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:13.797662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:14.149565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:14.559123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:16.780529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:16.780718       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-844661-m04"
	I1105 18:07:16.994375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:17.944364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:18.017747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:23.969145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:33.222978       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844661-m04"
	I1105 18:07:33.223667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:33.239449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:34.533989       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:07:44.277626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m04"
	I1105 18:08:29.557990       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844661-m04"
	I1105 18:08:29.558983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:29.585475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:29.697679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.853166ms"
	I1105 18:08:29.699962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.926µs"
	I1105 18:08:31.887524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	I1105 18:08:34.788426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-844661-m02"
	
	
	==> kube-proxy [1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:04:03.571824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:04:03.590655       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E1105 18:04:03.590765       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:04:03.621086       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:04:03.621144       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:04:03.621208       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:04:03.623505       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:04:03.623772       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:04:03.623783       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:04:03.625873       1 config.go:199] "Starting service config controller"
	I1105 18:04:03.625922       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:04:03.625956       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:04:03.625972       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:04:03.628076       1 config.go:328] "Starting node config controller"
	I1105 18:04:03.628108       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:04:03.726043       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:04:03.726043       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:04:03.728252       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab] <==
	E1105 18:03:56.072125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.276682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 18:03:56.276737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.329770       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 18:03:56.329820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:03:56.398642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:03:56.398687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1105 18:03:57.639067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 18:06:35.211549       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9e352dc6-ed87-4112-85c5-a76c00a8912f" pod="default/busybox-7dff88458-vkchm" assumedNode="ha-844661-m02" currentNode="ha-844661-m03"
	E1105 18:06:35.223911       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vkchm\": pod busybox-7dff88458-vkchm is already assigned to node \"ha-844661-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vkchm" node="ha-844661-m03"
	E1105 18:06:35.226313       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9e352dc6-ed87-4112-85c5-a76c00a8912f(default/busybox-7dff88458-vkchm) was assumed on ha-844661-m03 but assigned to ha-844661-m02" pod="default/busybox-7dff88458-vkchm"
	E1105 18:06:35.226429       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vkchm\": pod busybox-7dff88458-vkchm is already assigned to node \"ha-844661-m02\"" pod="default/busybox-7dff88458-vkchm"
	I1105 18:06:35.226528       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vkchm" node="ha-844661-m02"
	E1105 18:06:35.274759       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lzhpc\": pod busybox-7dff88458-lzhpc is already assigned to node \"ha-844661\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lzhpc" node="ha-844661"
	E1105 18:06:35.275967       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8687b103-4a1a-4529-9efd-46405325fb04(default/busybox-7dff88458-lzhpc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lzhpc"
	E1105 18:06:35.276226       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lzhpc\": pod busybox-7dff88458-lzhpc is already assigned to node \"ha-844661\"" pod="default/busybox-7dff88458-lzhpc"
	I1105 18:06:35.276363       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lzhpc" node="ha-844661"
	E1105 18:07:13.665747       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tfzng\": pod kube-proxy-tfzng is already assigned to node \"ha-844661-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tfzng" node="ha-844661-m04"
	E1105 18:07:13.665825       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f52b30f-7446-45ac-bb36-73398ffbfbc2(kube-system/kube-proxy-tfzng) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tfzng"
	E1105 18:07:13.665842       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tfzng\": pod kube-proxy-tfzng is already assigned to node \"ha-844661-m04\"" pod="kube-system/kube-proxy-tfzng"
	I1105 18:07:13.665872       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tfzng" node="ha-844661-m04"
	E1105 18:07:13.666212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vjq6v\": pod kindnet-vjq6v is already assigned to node \"ha-844661-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vjq6v" node="ha-844661-m04"
	E1105 18:07:13.666376       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d9f2bfec-eb1f-4373-bf3a-414ed6c8a630(kube-system/kindnet-vjq6v) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vjq6v"
	E1105 18:07:13.666420       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vjq6v\": pod kindnet-vjq6v is already assigned to node \"ha-844661-m04\"" pod="kube-system/kindnet-vjq6v"
	I1105 18:07:13.666453       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vjq6v" node="ha-844661-m04"
	
	
	==> kubelet <==
	Nov 05 18:08:58 ha-844661 kubelet[1296]: E1105 18:08:58.595270    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830138594734384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:58 ha-844661 kubelet[1296]: E1105 18:08:58.595295    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830138594734384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:08 ha-844661 kubelet[1296]: E1105 18:09:08.597057    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830148596755320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:08 ha-844661 kubelet[1296]: E1105 18:09:08.597097    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830148596755320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:18 ha-844661 kubelet[1296]: E1105 18:09:18.599471    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830158599122023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:18 ha-844661 kubelet[1296]: E1105 18:09:18.599506    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830158599122023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:28 ha-844661 kubelet[1296]: E1105 18:09:28.601448    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830168600902243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:28 ha-844661 kubelet[1296]: E1105 18:09:28.601554    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830168600902243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:38 ha-844661 kubelet[1296]: E1105 18:09:38.606338    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830178605104359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:38 ha-844661 kubelet[1296]: E1105 18:09:38.606359    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830178605104359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:48 ha-844661 kubelet[1296]: E1105 18:09:48.608274    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830188607885225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:48 ha-844661 kubelet[1296]: E1105 18:09:48.608666    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830188607885225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.519242    1296 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:09:58 ha-844661 kubelet[1296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:09:58 ha-844661 kubelet[1296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.611279    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830198610818845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:09:58 ha-844661 kubelet[1296]: E1105 18:09:58.611302    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830198610818845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:08 ha-844661 kubelet[1296]: E1105 18:10:08.613551    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830208612853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:08 ha-844661 kubelet[1296]: E1105 18:10:08.613956    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830208612853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:18 ha-844661 kubelet[1296]: E1105 18:10:18.616403    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830218615829286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:18 ha-844661 kubelet[1296]: E1105 18:10:18.616436    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830218615829286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:28 ha-844661 kubelet[1296]: E1105 18:10:28.617971    1296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830228617604942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:10:28 ha-844661 kubelet[1296]: E1105 18:10:28.618435    1296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830228617604942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844661 -n ha-844661
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (377.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-844661 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-844661 -v=7 --alsologtostderr
E1105 18:12:31.418842   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-844661 -v=7 --alsologtostderr: exit status 82 (2m2.69414268s)

                                                
                                                
-- stdout --
	* Stopping node "ha-844661-m04"  ...
	* Stopping node "ha-844661-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:10:33.566864   32357 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:10:33.567005   32357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:10:33.567014   32357 out.go:358] Setting ErrFile to fd 2...
	I1105 18:10:33.567018   32357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:10:33.567183   32357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:10:33.567391   32357 out.go:352] Setting JSON to false
	I1105 18:10:33.567480   32357 mustload.go:65] Loading cluster: ha-844661
	I1105 18:10:33.567866   32357 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:10:33.567943   32357 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:10:33.568119   32357 mustload.go:65] Loading cluster: ha-844661
	I1105 18:10:33.568246   32357 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:10:33.568268   32357 stop.go:39] StopHost: ha-844661-m04
	I1105 18:10:33.568640   32357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:10:33.568683   32357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:10:33.583576   32357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I1105 18:10:33.584028   32357 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:10:33.584528   32357 main.go:141] libmachine: Using API Version  1
	I1105 18:10:33.584580   32357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:10:33.584870   32357 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:10:33.587254   32357 out.go:177] * Stopping node "ha-844661-m04"  ...
	I1105 18:10:33.588301   32357 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1105 18:10:33.588324   32357 main.go:141] libmachine: (ha-844661-m04) Calling .DriverName
	I1105 18:10:33.588507   32357 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1105 18:10:33.588527   32357 main.go:141] libmachine: (ha-844661-m04) Calling .GetSSHHostname
	I1105 18:10:33.591283   32357 main.go:141] libmachine: (ha-844661-m04) DBG | domain ha-844661-m04 has defined MAC address 52:54:00:da:7e:cd in network mk-ha-844661
	I1105 18:10:33.591659   32357 main.go:141] libmachine: (ha-844661-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:cd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:06:57 +0000 UTC Type:0 Mac:52:54:00:da:7e:cd Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-844661-m04 Clientid:01:52:54:00:da:7e:cd}
	I1105 18:10:33.591688   32357 main.go:141] libmachine: (ha-844661-m04) DBG | domain ha-844661-m04 has defined IP address 192.168.39.89 and MAC address 52:54:00:da:7e:cd in network mk-ha-844661
	I1105 18:10:33.591891   32357 main.go:141] libmachine: (ha-844661-m04) Calling .GetSSHPort
	I1105 18:10:33.592034   32357 main.go:141] libmachine: (ha-844661-m04) Calling .GetSSHKeyPath
	I1105 18:10:33.592191   32357 main.go:141] libmachine: (ha-844661-m04) Calling .GetSSHUsername
	I1105 18:10:33.592323   32357 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m04/id_rsa Username:docker}
	I1105 18:10:33.679217   32357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1105 18:10:33.733145   32357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1105 18:10:33.786307   32357 main.go:141] libmachine: Stopping "ha-844661-m04"...
	I1105 18:10:33.786346   32357 main.go:141] libmachine: (ha-844661-m04) Calling .GetState
	I1105 18:10:33.788081   32357 main.go:141] libmachine: (ha-844661-m04) Calling .Stop
	I1105 18:10:33.792182   32357 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 0/120
	I1105 18:10:34.794509   32357 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 1/120
	I1105 18:10:35.796895   32357 main.go:141] libmachine: (ha-844661-m04) Calling .GetState
	I1105 18:10:35.798129   32357 main.go:141] libmachine: Machine "ha-844661-m04" was stopped.
	I1105 18:10:35.798143   32357 stop.go:75] duration metric: took 2.209844574s to stop
	I1105 18:10:35.798173   32357 stop.go:39] StopHost: ha-844661-m03
	I1105 18:10:35.798459   32357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:10:35.798524   32357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:10:35.813698   32357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43259
	I1105 18:10:35.814142   32357 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:10:35.814602   32357 main.go:141] libmachine: Using API Version  1
	I1105 18:10:35.814623   32357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:10:35.814958   32357 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:10:35.816882   32357 out.go:177] * Stopping node "ha-844661-m03"  ...
	I1105 18:10:35.818150   32357 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1105 18:10:35.818176   32357 main.go:141] libmachine: (ha-844661-m03) Calling .DriverName
	I1105 18:10:35.818355   32357 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1105 18:10:35.818374   32357 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHHostname
	I1105 18:10:35.821218   32357 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:10:35.821567   32357 main.go:141] libmachine: (ha-844661-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:70:0e", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:05:35 +0000 UTC Type:0 Mac:52:54:00:62:70:0e Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-844661-m03 Clientid:01:52:54:00:62:70:0e}
	I1105 18:10:35.821590   32357 main.go:141] libmachine: (ha-844661-m03) DBG | domain ha-844661-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:62:70:0e in network mk-ha-844661
	I1105 18:10:35.821728   32357 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHPort
	I1105 18:10:35.821875   32357 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHKeyPath
	I1105 18:10:35.822013   32357 main.go:141] libmachine: (ha-844661-m03) Calling .GetSSHUsername
	I1105 18:10:35.822118   32357 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m03/id_rsa Username:docker}
	I1105 18:10:35.911535   32357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1105 18:10:35.964091   32357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1105 18:10:36.017798   32357 main.go:141] libmachine: Stopping "ha-844661-m03"...
	I1105 18:10:36.017822   32357 main.go:141] libmachine: (ha-844661-m03) Calling .GetState
	I1105 18:10:36.019651   32357 main.go:141] libmachine: (ha-844661-m03) Calling .Stop
	I1105 18:10:36.023439   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 0/120
	I1105 18:10:37.024635   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 1/120
	I1105 18:10:38.025877   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 2/120
	I1105 18:10:39.027120   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 3/120
	I1105 18:10:40.028504   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 4/120
	I1105 18:10:41.030506   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 5/120
	I1105 18:10:42.031896   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 6/120
	I1105 18:10:43.033274   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 7/120
	I1105 18:10:44.035052   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 8/120
	I1105 18:10:45.036398   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 9/120
	I1105 18:10:46.038290   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 10/120
	I1105 18:10:47.039734   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 11/120
	I1105 18:10:48.041297   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 12/120
	I1105 18:10:49.042751   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 13/120
	I1105 18:10:50.044246   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 14/120
	I1105 18:10:51.045889   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 15/120
	I1105 18:10:52.047493   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 16/120
	I1105 18:10:53.049462   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 17/120
	I1105 18:10:54.050708   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 18/120
	I1105 18:10:55.053144   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 19/120
	I1105 18:10:56.055580   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 20/120
	I1105 18:10:57.056978   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 21/120
	I1105 18:10:58.058584   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 22/120
	I1105 18:10:59.060121   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 23/120
	I1105 18:11:00.061710   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 24/120
	I1105 18:11:01.063744   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 25/120
	I1105 18:11:02.065293   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 26/120
	I1105 18:11:03.066924   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 27/120
	I1105 18:11:04.068531   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 28/120
	I1105 18:11:05.069872   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 29/120
	I1105 18:11:06.072052   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 30/120
	I1105 18:11:07.073670   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 31/120
	I1105 18:11:08.075172   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 32/120
	I1105 18:11:09.077466   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 33/120
	I1105 18:11:10.078801   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 34/120
	I1105 18:11:11.080516   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 35/120
	I1105 18:11:12.081841   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 36/120
	I1105 18:11:13.083130   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 37/120
	I1105 18:11:14.084300   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 38/120
	I1105 18:11:15.085532   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 39/120
	I1105 18:11:16.087338   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 40/120
	I1105 18:11:17.088542   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 41/120
	I1105 18:11:18.089847   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 42/120
	I1105 18:11:19.091054   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 43/120
	I1105 18:11:20.092329   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 44/120
	I1105 18:11:21.094093   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 45/120
	I1105 18:11:22.095283   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 46/120
	I1105 18:11:23.097343   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 47/120
	I1105 18:11:24.098707   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 48/120
	I1105 18:11:25.100024   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 49/120
	I1105 18:11:26.101736   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 50/120
	I1105 18:11:27.103100   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 51/120
	I1105 18:11:28.104572   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 52/120
	I1105 18:11:29.105811   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 53/120
	I1105 18:11:30.107250   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 54/120
	I1105 18:11:31.109096   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 55/120
	I1105 18:11:32.110573   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 56/120
	I1105 18:11:33.111888   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 57/120
	I1105 18:11:34.113264   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 58/120
	I1105 18:11:35.115104   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 59/120
	I1105 18:11:36.116706   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 60/120
	I1105 18:11:37.117998   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 61/120
	I1105 18:11:38.119896   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 62/120
	I1105 18:11:39.121328   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 63/120
	I1105 18:11:40.122510   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 64/120
	I1105 18:11:41.124245   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 65/120
	I1105 18:11:42.125518   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 66/120
	I1105 18:11:43.126995   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 67/120
	I1105 18:11:44.128300   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 68/120
	I1105 18:11:45.129907   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 69/120
	I1105 18:11:46.131322   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 70/120
	I1105 18:11:47.132874   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 71/120
	I1105 18:11:48.134303   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 72/120
	I1105 18:11:49.135668   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 73/120
	I1105 18:11:50.137064   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 74/120
	I1105 18:11:51.139185   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 75/120
	I1105 18:11:52.140519   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 76/120
	I1105 18:11:53.141954   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 77/120
	I1105 18:11:54.143223   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 78/120
	I1105 18:11:55.144887   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 79/120
	I1105 18:11:56.146600   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 80/120
	I1105 18:11:57.148116   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 81/120
	I1105 18:11:58.149355   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 82/120
	I1105 18:11:59.150881   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 83/120
	I1105 18:12:00.152329   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 84/120
	I1105 18:12:01.154116   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 85/120
	I1105 18:12:02.155624   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 86/120
	I1105 18:12:03.157151   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 87/120
	I1105 18:12:04.158553   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 88/120
	I1105 18:12:05.159931   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 89/120
	I1105 18:12:06.161381   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 90/120
	I1105 18:12:07.162815   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 91/120
	I1105 18:12:08.164255   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 92/120
	I1105 18:12:09.165667   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 93/120
	I1105 18:12:10.167030   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 94/120
	I1105 18:12:11.169753   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 95/120
	I1105 18:12:12.171179   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 96/120
	I1105 18:12:13.173505   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 97/120
	I1105 18:12:14.174907   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 98/120
	I1105 18:12:15.176396   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 99/120
	I1105 18:12:16.177972   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 100/120
	I1105 18:12:17.179331   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 101/120
	I1105 18:12:18.180695   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 102/120
	I1105 18:12:19.182053   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 103/120
	I1105 18:12:20.183493   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 104/120
	I1105 18:12:21.185494   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 105/120
	I1105 18:12:22.187661   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 106/120
	I1105 18:12:23.189693   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 107/120
	I1105 18:12:24.191099   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 108/120
	I1105 18:12:25.192375   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 109/120
	I1105 18:12:26.194125   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 110/120
	I1105 18:12:27.195549   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 111/120
	I1105 18:12:28.196872   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 112/120
	I1105 18:12:29.198225   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 113/120
	I1105 18:12:30.199731   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 114/120
	I1105 18:12:31.201235   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 115/120
	I1105 18:12:32.202587   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 116/120
	I1105 18:12:33.204012   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 117/120
	I1105 18:12:34.205493   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 118/120
	I1105 18:12:35.207354   32357 main.go:141] libmachine: (ha-844661-m03) Waiting for machine to stop 119/120
	I1105 18:12:36.208180   32357 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1105 18:12:36.208236   32357 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1105 18:12:36.210202   32357 out.go:201] 
	W1105 18:12:36.211554   32357 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1105 18:12:36.211577   32357 out.go:270] * 
	* 
	W1105 18:12:36.214086   32357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 18:12:36.215633   32357 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-844661 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-844661 --wait=true -v=7 --alsologtostderr
E1105 18:12:59.123100   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:14:06.921429   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-844661 --wait=true -v=7 --alsologtostderr: (4m12.067862122s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-844661
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844661 -n ha-844661
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 logs -n 25: (2.088589816s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m04 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp testdata/cp-test.txt                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m04_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03:/home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m03 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-844661 node stop m02 -v=7                                                     | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-844661 node start m02 -v=7                                                    | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-844661 -v=7                                                           | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-844661 -v=7                                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-844661 --wait=true -v=7                                                    | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:12 UTC | 05 Nov 24 18:16 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-844661                                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:16 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:12:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:12:36.264651   32820 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:12:36.264750   32820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:12:36.264758   32820 out.go:358] Setting ErrFile to fd 2...
	I1105 18:12:36.264762   32820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:12:36.264969   32820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:12:36.265498   32820 out.go:352] Setting JSON to false
	I1105 18:12:36.266393   32820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3298,"bootTime":1730827058,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:12:36.266487   32820 start.go:139] virtualization: kvm guest
	I1105 18:12:36.268655   32820 out.go:177] * [ha-844661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:12:36.270032   32820 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:12:36.270042   32820 notify.go:220] Checking for updates...
	I1105 18:12:36.272639   32820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:12:36.273735   32820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:12:36.274803   32820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:12:36.275940   32820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:12:36.277293   32820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:12:36.278893   32820 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:12:36.279005   32820 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:12:36.279416   32820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:12:36.279468   32820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:12:36.294292   32820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I1105 18:12:36.294643   32820 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:12:36.295148   32820 main.go:141] libmachine: Using API Version  1
	I1105 18:12:36.295168   32820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:12:36.295522   32820 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:12:36.295722   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:12:36.331820   32820 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:12:36.333076   32820 start.go:297] selected driver: kvm2
	I1105 18:12:36.333093   32820 start.go:901] validating driver "kvm2" against &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false defa
ult-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:12:36.333218   32820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:12:36.333522   32820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:12:36.333587   32820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:12:36.348383   32820 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:12:36.349385   32820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:12:36.349431   32820 cni.go:84] Creating CNI manager for ""
	I1105 18:12:36.349495   32820 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1105 18:12:36.349577   32820 start.go:340] cluster config:
	{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fals
e headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:12:36.349770   32820 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:12:36.351520   32820 out.go:177] * Starting "ha-844661" primary control-plane node in "ha-844661" cluster
	I1105 18:12:36.352938   32820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:12:36.352971   32820 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:12:36.352977   32820 cache.go:56] Caching tarball of preloaded images
	I1105 18:12:36.353035   32820 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:12:36.353045   32820 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:12:36.353152   32820 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:12:36.353345   32820 start.go:360] acquireMachinesLock for ha-844661: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:12:36.353383   32820 start.go:364] duration metric: took 21.817µs to acquireMachinesLock for "ha-844661"
	I1105 18:12:36.353397   32820 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:12:36.353404   32820 fix.go:54] fixHost starting: 
	I1105 18:12:36.353644   32820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:12:36.353674   32820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:12:36.367908   32820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I1105 18:12:36.368363   32820 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:12:36.368931   32820 main.go:141] libmachine: Using API Version  1
	I1105 18:12:36.368950   32820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:12:36.369221   32820 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:12:36.369408   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:12:36.369522   32820 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:12:36.371279   32820 fix.go:112] recreateIfNeeded on ha-844661: state=Running err=<nil>
	W1105 18:12:36.371313   32820 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:12:36.373142   32820 out.go:177] * Updating the running kvm2 "ha-844661" VM ...
	I1105 18:12:36.374198   32820 machine.go:93] provisionDockerMachine start ...
	I1105 18:12:36.374217   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:12:36.374453   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.376649   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.377009   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.377036   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.377130   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:36.377277   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.377382   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.377491   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:36.377609   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:12:36.377795   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:12:36.377806   32820 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:12:36.484571   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661
	
	I1105 18:12:36.484605   32820 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:12:36.484833   32820 buildroot.go:166] provisioning hostname "ha-844661"
	I1105 18:12:36.484855   32820 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:12:36.484990   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.487717   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.488126   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.488146   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.488319   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:36.488473   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.488657   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.488808   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:36.488986   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:12:36.489184   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:12:36.489203   32820 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661 && echo "ha-844661" | sudo tee /etc/hostname
	I1105 18:12:36.614479   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661
	
	I1105 18:12:36.614503   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.617291   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.617674   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.617703   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.617970   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:36.618173   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.618300   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.618396   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:36.618529   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:12:36.618772   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:12:36.618796   32820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:12:36.724419   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:12:36.724453   32820 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:12:36.724486   32820 buildroot.go:174] setting up certificates
	I1105 18:12:36.724494   32820 provision.go:84] configureAuth start
	I1105 18:12:36.724506   32820 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:12:36.724767   32820 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:12:36.727323   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.727631   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.727655   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.727795   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.730072   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.730531   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.730559   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.730695   32820 provision.go:143] copyHostCerts
	I1105 18:12:36.730733   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:12:36.730781   32820 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:12:36.730798   32820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:12:36.730875   32820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:12:36.730985   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:12:36.731011   32820 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:12:36.731020   32820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:12:36.731064   32820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:12:36.731124   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:12:36.731146   32820 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:12:36.731152   32820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:12:36.731174   32820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:12:36.731238   32820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661 san=[127.0.0.1 192.168.39.48 ha-844661 localhost minikube]
	I1105 18:12:36.958461   32820 provision.go:177] copyRemoteCerts
	I1105 18:12:36.958514   32820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:12:36.958537   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.961005   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.961360   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.961390   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.961558   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:36.961725   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.961853   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:36.961951   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:12:37.041731   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:12:37.041790   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:12:37.070338   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:12:37.070404   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1105 18:12:37.099030   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:12:37.099094   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:12:37.127270   32820 provision.go:87] duration metric: took 402.761718ms to configureAuth
	I1105 18:12:37.127295   32820 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:12:37.127499   32820 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:12:37.127559   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:37.130261   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:37.130632   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:37.130662   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:37.130805   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:37.131005   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:37.131135   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:37.131279   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:37.131422   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:12:37.131605   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:12:37.131631   32820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:14:07.820819   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:14:07.820854   32820 machine.go:96] duration metric: took 1m31.446641227s to provisionDockerMachine
	I1105 18:14:07.820870   32820 start.go:293] postStartSetup for "ha-844661" (driver="kvm2")
	I1105 18:14:07.820884   32820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:14:07.820907   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:07.821236   32820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:14:07.821269   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:07.824568   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:07.825194   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:07.825216   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:07.825401   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:07.825559   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:07.825758   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:07.825919   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:14:07.906435   32820 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:14:07.910633   32820 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:14:07.910664   32820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:14:07.910729   32820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:14:07.910811   32820 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:14:07.910821   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:14:07.910923   32820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:14:07.920114   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:14:07.942958   32820 start.go:296] duration metric: took 122.072341ms for postStartSetup
	I1105 18:14:07.943017   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:07.943293   32820 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 18:14:07.943317   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:07.946000   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:07.946445   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:07.946473   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:07.946615   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:07.946762   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:07.946924   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:07.947055   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	W1105 18:14:08.024811   32820 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1105 18:14:08.024839   32820 fix.go:56] duration metric: took 1m31.67143423s for fixHost
	I1105 18:14:08.024865   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:08.027573   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.027942   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:08.027971   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.028134   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:08.028311   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:08.028446   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:08.028560   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:08.028713   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:14:08.028885   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:14:08.028895   32820 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:14:08.127575   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830448.095489424
	
	I1105 18:14:08.127598   32820 fix.go:216] guest clock: 1730830448.095489424
	I1105 18:14:08.127608   32820 fix.go:229] Guest: 2024-11-05 18:14:08.095489424 +0000 UTC Remote: 2024-11-05 18:14:08.024849059 +0000 UTC m=+91.798663283 (delta=70.640365ms)
	I1105 18:14:08.127631   32820 fix.go:200] guest clock delta is within tolerance: 70.640365ms
	I1105 18:14:08.127638   32820 start.go:83] releasing machines lock for "ha-844661", held for 1m31.774244883s
	I1105 18:14:08.127662   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:08.127967   32820 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:14:08.130885   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.131325   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:08.131351   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.131515   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:08.132023   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:08.132193   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:08.132294   32820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:14:08.132325   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:08.132369   32820 ssh_runner.go:195] Run: cat /version.json
	I1105 18:14:08.132387   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:08.135115   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.135313   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.135458   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:08.135480   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.135626   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:08.135799   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:08.135812   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:08.135817   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.135967   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:08.135986   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:08.136166   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:08.136159   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:14:08.136327   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:08.136467   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:14:08.243954   32820 ssh_runner.go:195] Run: systemctl --version
	I1105 18:14:08.249803   32820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:14:08.408281   32820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:14:08.413746   32820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:14:08.413802   32820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:14:08.422911   32820 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:14:08.422944   32820 start.go:495] detecting cgroup driver to use...
	I1105 18:14:08.423022   32820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:14:08.439441   32820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:14:08.453154   32820 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:14:08.453226   32820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:14:08.466607   32820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:14:08.480889   32820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:14:08.641584   32820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:14:08.801408   32820 docker.go:233] disabling docker service ...
	I1105 18:14:08.801480   32820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:14:08.818959   32820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:14:08.832911   32820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:14:08.974339   32820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:14:09.116439   32820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:14:09.129983   32820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:14:09.147386   32820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:14:09.147438   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.157785   32820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:14:09.157855   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.168079   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.178138   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.187990   32820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:14:09.199052   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.209900   32820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.221084   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.233167   32820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:14:09.244032   32820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:14:09.253398   32820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:14:09.395366   32820 ssh_runner.go:195] Run: sudo systemctl restart crio
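Illustrative aside (not part of the captured log): the sed invocations above pin the pause image to registry.k8s.io/pause:3.10 and switch cri-o to the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf before the service is restarted. A minimal Go sketch of those two substitutions, assuming a local copy of the file named 02-crio.conf (hypothetical helper, not minikube code):

// rewrite_crio_conf.go - applies the same pause_image and cgroup_manager
// substitutions that the sed commands in the log perform, on a local copy.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // assumed local copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}

On the real node the changes only take effect after the systemctl daemon-reload and systemctl restart crio steps shown above.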
	I1105 18:14:09.625677   32820 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:14:09.625758   32820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:14:09.630532   32820 start.go:563] Will wait 60s for crictl version
	I1105 18:14:09.630590   32820 ssh_runner.go:195] Run: which crictl
	I1105 18:14:09.634273   32820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:14:09.668826   32820 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:14:09.668913   32820 ssh_runner.go:195] Run: crio --version
	I1105 18:14:09.697216   32820 ssh_runner.go:195] Run: crio --version
	I1105 18:14:09.726062   32820 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:14:09.727487   32820 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:14:09.729815   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:09.730155   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:09.730188   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:09.730419   32820 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:14:09.734905   32820 kubeadm.go:883] updating cluster {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:14:09.735069   32820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:14:09.735113   32820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:14:09.779463   32820 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:14:09.779481   32820 crio.go:433] Images already preloaded, skipping extraction
	I1105 18:14:09.779530   32820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:14:09.814166   32820 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:14:09.814190   32820 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:14:09.814201   32820 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.2 crio true true} ...
	I1105 18:14:09.814335   32820 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:14:09.814406   32820 ssh_runner.go:195] Run: crio config
	I1105 18:14:09.871793   32820 cni.go:84] Creating CNI manager for ""
	I1105 18:14:09.871812   32820 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1105 18:14:09.871820   32820 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:14:09.871847   32820 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844661 NodeName:ha-844661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:14:09.871961   32820 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
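Illustrative aside (not part of the captured log): the generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch that walks the documents and prints each kind, assuming a local copy saved as kubeadm.yaml and the gopkg.in/yaml.v3 package (hypothetical helper, not minikube code):

// dump_kubeadm_kinds.go - lists the kind/apiVersion of every document in the
// multi-document kubeadm config.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}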
	
	I1105 18:14:09.871982   32820 kube-vip.go:115] generating kube-vip config ...
	I1105 18:14:09.872030   32820 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:14:09.883010   32820 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:14:09.883133   32820 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
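Illustrative aside (not part of the captured log): the static pod manifest above runs kube-vip on the control-plane nodes and advertises the HA virtual IP 192.168.39.254 on port 8443 (the APIServerHAVIP and API server port from the cluster config). A minimal Go sketch that checks whether that VIP answers on the API server port, assuming it is run from a host with access to the 192.168.39.0/24 network (hypothetical probe, not part of the test suite):

// probe_vip.go - TCP reachability check for the kube-vip virtual IP.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := net.JoinHostPort("192.168.39.254", "8443") // VIP and port from the manifest above
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable at", addr)
}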
	I1105 18:14:09.883195   32820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:14:09.892426   32820 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:14:09.892486   32820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 18:14:09.901149   32820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1105 18:14:09.916722   32820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:14:09.931795   32820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1105 18:14:09.947312   32820 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:14:09.963798   32820 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:14:09.968528   32820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:14:10.111653   32820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:14:10.126222   32820 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.48
	I1105 18:14:10.126247   32820 certs.go:194] generating shared ca certs ...
	I1105 18:14:10.126266   32820 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:14:10.126436   32820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:14:10.126500   32820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:14:10.126512   32820 certs.go:256] generating profile certs ...
	I1105 18:14:10.126589   32820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:14:10.126617   32820 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.f667108c
	I1105 18:14:10.126629   32820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.f667108c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.52 192.168.39.254]
	I1105 18:14:10.220928   32820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.f667108c ...
	I1105 18:14:10.220961   32820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.f667108c: {Name:mka22debdd11ee8a23fa8fa253ceeed26967ff51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:14:10.221122   32820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.f667108c ...
	I1105 18:14:10.221133   32820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.f667108c: {Name:mk848c00e2ba3cfb4c1249063b757c53981df156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:14:10.221196   32820 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.f667108c -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:14:10.221353   32820 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.f667108c -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:14:10.221481   32820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
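	The apiserver certificate generated above includes every control-plane IP plus the HA VIP (192.168.39.254) in its IP SANs, per the IP list in the log. A hedged sketch of inspecting those SANs with openssl, using the profile path shown above:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt \
	  | grep -A1 "Subject Alternative Name"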
	I1105 18:14:10.221496   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:14:10.221508   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:14:10.221519   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:14:10.221552   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:14:10.221565   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:14:10.221578   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:14:10.221590   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:14:10.221600   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:14:10.221649   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:14:10.221675   32820 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:14:10.221684   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:14:10.221704   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:14:10.221727   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:14:10.221749   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:14:10.221785   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:14:10.221811   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:14:10.221825   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:14:10.221837   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:14:10.222385   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:14:10.247035   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:14:10.269434   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:14:10.291588   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:14:10.314043   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 18:14:10.336047   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:14:10.357978   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:14:10.379989   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:14:10.401605   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:14:10.423390   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:14:10.445276   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:14:10.467081   32820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:14:10.483399   32820 ssh_runner.go:195] Run: openssl version
	I1105 18:14:10.489446   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:14:10.499902   32820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:14:10.504193   32820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:14:10.504233   32820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:14:10.509737   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:14:10.518891   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:14:10.529521   32820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:14:10.534509   32820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:14:10.534563   32820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:14:10.539809   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:14:10.548730   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:14:10.558894   32820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:14:10.563669   32820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:14:10.563712   32820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:14:10.569167   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
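	The /etc/ssl/certs link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the respective certificates, which is exactly what the `openssl x509 -hash -noout` runs compute. A minimal sketch of deriving one link name by hand:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${hash}.0"                                   # b5213941.0 per the log above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"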
	I1105 18:14:10.578119   32820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:14:10.582345   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:14:10.587781   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:14:10.593143   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:14:10.598457   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:14:10.603956   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:14:10.609409   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
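	Each `-checkend 86400` run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would prompt minikube to regenerate that certificate before starting the cluster. The same check, spelled out as a sketch:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo "still valid 24h from now" \
	  || echo "expires within 24h"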
	I1105 18:14:10.614908   32820 kubeadm.go:392] StartCluster: {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecla
ss:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:14:10.615087   32820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:14:10.615149   32820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:14:10.649544   32820 cri.go:89] found id: "f410dd632de8723060cf99fa3a27b6f67582270c89d3f6f2e065e06cb776d60c"
	I1105 18:14:10.649567   32820 cri.go:89] found id: "c118bfbaef057c7dc0a1d79e948469159301ea31c97d3e4a0795519c06de55d1"
	I1105 18:14:10.649571   32820 cri.go:89] found id: "6e98dccc93e8133f64922055804c80f2d565c34af5c38b8d799a314133853d95"
	I1105 18:14:10.649575   32820 cri.go:89] found id: "c12cccdde9a468fa488f3e906c405113374ee5a6f8160487a01d17b2a3952f04"
	I1105 18:14:10.649577   32820 cri.go:89] found id: "4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8"
	I1105 18:14:10.649580   32820 cri.go:89] found id: "2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a"
	I1105 18:14:10.649583   32820 cri.go:89] found id: "bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf"
	I1105 18:14:10.649585   32820 cri.go:89] found id: "1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6"
	I1105 18:14:10.649588   32820 cri.go:89] found id: "9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0"
	I1105 18:14:10.649592   32820 cri.go:89] found id: "f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc"
	I1105 18:14:10.649602   32820 cri.go:89] found id: "695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab"
	I1105 18:14:10.649604   32820 cri.go:89] found id: "9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c"
	I1105 18:14:10.649607   32820 cri.go:89] found id: "d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f"
	I1105 18:14:10.649610   32820 cri.go:89] found id: ""
	I1105 18:14:10.649648   32820 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
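The container IDs in the truncated log above come from crictl, filtered to pods in the kube-system namespace. A sketch of reproducing the query and inspecting one of the listed IDs on the node (assumes crictl is configured for the CRI-O socket, as on the minikube VM):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect f410dd632de8723060cf99fa3a27b6f67582270c89d3f6f2e065e06cb776d60c | head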
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844661 -n ha-844661
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (377.54s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 stop -v=7 --alsologtostderr
E1105 18:17:31.419521   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:19:06.921084   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844661 stop -v=7 --alsologtostderr: exit status 82 (2m0.462617486s)

                                                
                                                
-- stdout --
	* Stopping node "ha-844661-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:17:08.343093   34640 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:17:08.343326   34640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:17:08.343334   34640 out.go:358] Setting ErrFile to fd 2...
	I1105 18:17:08.343338   34640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:17:08.343501   34640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:17:08.343710   34640 out.go:352] Setting JSON to false
	I1105 18:17:08.343780   34640 mustload.go:65] Loading cluster: ha-844661
	I1105 18:17:08.344170   34640 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:17:08.344255   34640 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:17:08.344428   34640 mustload.go:65] Loading cluster: ha-844661
	I1105 18:17:08.344564   34640 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:17:08.344594   34640 stop.go:39] StopHost: ha-844661-m04
	I1105 18:17:08.345000   34640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:17:08.345037   34640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:17:08.360550   34640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40811
	I1105 18:17:08.361008   34640 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:17:08.361558   34640 main.go:141] libmachine: Using API Version  1
	I1105 18:17:08.361580   34640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:17:08.361930   34640 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:17:08.364373   34640 out.go:177] * Stopping node "ha-844661-m04"  ...
	I1105 18:17:08.366272   34640 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1105 18:17:08.366298   34640 main.go:141] libmachine: (ha-844661-m04) Calling .DriverName
	I1105 18:17:08.366522   34640 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1105 18:17:08.366554   34640 main.go:141] libmachine: (ha-844661-m04) Calling .GetSSHHostname
	I1105 18:17:08.369513   34640 main.go:141] libmachine: (ha-844661-m04) DBG | domain ha-844661-m04 has defined MAC address 52:54:00:da:7e:cd in network mk-ha-844661
	I1105 18:17:08.369909   34640 main.go:141] libmachine: (ha-844661-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:cd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:16:36 +0000 UTC Type:0 Mac:52:54:00:da:7e:cd Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-844661-m04 Clientid:01:52:54:00:da:7e:cd}
	I1105 18:17:08.369945   34640 main.go:141] libmachine: (ha-844661-m04) DBG | domain ha-844661-m04 has defined IP address 192.168.39.89 and MAC address 52:54:00:da:7e:cd in network mk-ha-844661
	I1105 18:17:08.370111   34640 main.go:141] libmachine: (ha-844661-m04) Calling .GetSSHPort
	I1105 18:17:08.370260   34640 main.go:141] libmachine: (ha-844661-m04) Calling .GetSSHKeyPath
	I1105 18:17:08.370398   34640 main.go:141] libmachine: (ha-844661-m04) Calling .GetSSHUsername
	I1105 18:17:08.370525   34640 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661-m04/id_rsa Username:docker}
	I1105 18:17:08.457229   34640 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1105 18:17:08.510254   34640 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
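	Before stopping the node, minikube backs up its CNI and Kubernetes configuration into /var/lib/minikube/backup via the rsync calls above (--relative keeps the original /etc paths), so a later restart can restore them. A quick sketch of checking the backup on the node; not part of the test run:

	sudo ls -R /var/lib/minikube/backup/etc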
	I1105 18:17:08.562739   34640 main.go:141] libmachine: Stopping "ha-844661-m04"...
	I1105 18:17:08.562763   34640 main.go:141] libmachine: (ha-844661-m04) Calling .GetState
	I1105 18:17:08.564186   34640 main.go:141] libmachine: (ha-844661-m04) Calling .Stop
	I1105 18:17:08.567608   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 0/120
	I1105 18:17:09.569044   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 1/120
	I1105 18:17:10.570500   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 2/120
	I1105 18:17:11.571890   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 3/120
	I1105 18:17:12.573147   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 4/120
	I1105 18:17:13.575024   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 5/120
	I1105 18:17:14.576343   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 6/120
	I1105 18:17:15.577576   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 7/120
	I1105 18:17:16.579095   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 8/120
	I1105 18:17:17.580460   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 9/120
	I1105 18:17:18.582638   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 10/120
	I1105 18:17:19.584095   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 11/120
	I1105 18:17:20.585230   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 12/120
	I1105 18:17:21.586537   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 13/120
	I1105 18:17:22.588030   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 14/120
	I1105 18:17:23.589790   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 15/120
	I1105 18:17:24.590985   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 16/120
	I1105 18:17:25.592332   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 17/120
	I1105 18:17:26.593673   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 18/120
	I1105 18:17:27.595841   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 19/120
	I1105 18:17:28.597884   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 20/120
	I1105 18:17:29.599100   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 21/120
	I1105 18:17:30.600382   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 22/120
	I1105 18:17:31.601830   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 23/120
	I1105 18:17:32.603308   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 24/120
	I1105 18:17:33.605239   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 25/120
	I1105 18:17:34.606671   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 26/120
	I1105 18:17:35.608097   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 27/120
	I1105 18:17:36.609614   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 28/120
	I1105 18:17:37.610998   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 29/120
	I1105 18:17:38.612475   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 30/120
	I1105 18:17:39.613821   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 31/120
	I1105 18:17:40.615263   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 32/120
	I1105 18:17:41.616596   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 33/120
	I1105 18:17:42.618755   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 34/120
	I1105 18:17:43.620676   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 35/120
	I1105 18:17:44.622023   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 36/120
	I1105 18:17:45.623658   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 37/120
	I1105 18:17:46.624905   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 38/120
	I1105 18:17:47.626818   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 39/120
	I1105 18:17:48.628401   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 40/120
	I1105 18:17:49.630588   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 41/120
	I1105 18:17:50.631924   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 42/120
	I1105 18:17:51.633277   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 43/120
	I1105 18:17:52.634841   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 44/120
	I1105 18:17:53.636773   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 45/120
	I1105 18:17:54.637985   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 46/120
	I1105 18:17:55.639448   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 47/120
	I1105 18:17:56.641583   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 48/120
	I1105 18:17:57.643375   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 49/120
	I1105 18:17:58.645327   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 50/120
	I1105 18:17:59.646882   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 51/120
	I1105 18:18:00.648307   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 52/120
	I1105 18:18:01.650222   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 53/120
	I1105 18:18:02.651579   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 54/120
	I1105 18:18:03.653355   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 55/120
	I1105 18:18:04.654586   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 56/120
	I1105 18:18:05.655953   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 57/120
	I1105 18:18:06.657418   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 58/120
	I1105 18:18:07.658727   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 59/120
	I1105 18:18:08.660347   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 60/120
	I1105 18:18:09.661610   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 61/120
	I1105 18:18:10.663034   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 62/120
	I1105 18:18:11.664149   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 63/120
	I1105 18:18:12.665401   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 64/120
	I1105 18:18:13.667140   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 65/120
	I1105 18:18:14.668409   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 66/120
	I1105 18:18:15.670108   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 67/120
	I1105 18:18:16.671402   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 68/120
	I1105 18:18:17.673420   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 69/120
	I1105 18:18:18.675302   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 70/120
	I1105 18:18:19.676731   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 71/120
	I1105 18:18:20.678004   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 72/120
	I1105 18:18:21.679615   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 73/120
	I1105 18:18:22.680865   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 74/120
	I1105 18:18:23.682377   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 75/120
	I1105 18:18:24.683869   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 76/120
	I1105 18:18:25.685322   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 77/120
	I1105 18:18:26.686794   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 78/120
	I1105 18:18:27.688111   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 79/120
	I1105 18:18:28.690050   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 80/120
	I1105 18:18:29.691323   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 81/120
	I1105 18:18:30.693529   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 82/120
	I1105 18:18:31.694781   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 83/120
	I1105 18:18:32.696142   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 84/120
	I1105 18:18:33.698099   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 85/120
	I1105 18:18:34.699406   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 86/120
	I1105 18:18:35.700721   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 87/120
	I1105 18:18:36.702060   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 88/120
	I1105 18:18:37.703817   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 89/120
	I1105 18:18:38.705402   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 90/120
	I1105 18:18:39.706767   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 91/120
	I1105 18:18:40.708091   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 92/120
	I1105 18:18:41.709888   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 93/120
	I1105 18:18:42.711210   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 94/120
	I1105 18:18:43.713346   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 95/120
	I1105 18:18:44.714822   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 96/120
	I1105 18:18:45.716577   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 97/120
	I1105 18:18:46.717925   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 98/120
	I1105 18:18:47.719596   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 99/120
	I1105 18:18:48.721382   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 100/120
	I1105 18:18:49.723605   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 101/120
	I1105 18:18:50.724971   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 102/120
	I1105 18:18:51.726046   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 103/120
	I1105 18:18:52.727417   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 104/120
	I1105 18:18:53.729403   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 105/120
	I1105 18:18:54.730847   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 106/120
	I1105 18:18:55.732192   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 107/120
	I1105 18:18:56.733545   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 108/120
	I1105 18:18:57.734964   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 109/120
	I1105 18:18:58.737061   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 110/120
	I1105 18:18:59.738550   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 111/120
	I1105 18:19:00.739823   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 112/120
	I1105 18:19:01.741671   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 113/120
	I1105 18:19:02.743123   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 114/120
	I1105 18:19:03.745341   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 115/120
	I1105 18:19:04.746491   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 116/120
	I1105 18:19:05.748063   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 117/120
	I1105 18:19:06.749512   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 118/120
	I1105 18:19:07.751025   34640 main.go:141] libmachine: (ha-844661-m04) Waiting for machine to stop 119/120
	I1105 18:19:08.752191   34640 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1105 18:19:08.752240   34640 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1105 18:19:08.754066   34640 out.go:201] 
	W1105 18:19:08.755448   34640 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1105 18:19:08.755467   34640 out.go:270] * 
	* 
	W1105 18:19:08.757680   34640 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 18:19:08.759117   34640 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-844661 stop -v=7 --alsologtostderr": exit status 82
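The stop loop above polls for roughly 120 seconds waiting for ha-844661-m04 to shut down, then gives up with GUEST_STOP_TIMEOUT (exit status 82) while the VM is still running. When triaging this kind of hang on the kvm2 driver, one would typically inspect the libvirt domain directly, roughly as follows (a sketch; assumes the domain name matches the minikube node name):

	virsh -c qemu:///system list --all | grep ha-844661-m04   # confirm the domain is still running
	virsh -c qemu:///system shutdown ha-844661-m04            # retry a graceful ACPI shutdown
	virsh -c qemu:///system destroy ha-844661-m04             # last resort: hard power-off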
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr: (18.938787076s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844661 -n ha-844661
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 logs -n 25: (1.951108469s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m04 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp testdata/cp-test.txt                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661:/home/docker/cp-test_ha-844661-m04_ha-844661.txt                       |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661 sudo cat                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661.txt                                 |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m02:/home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m02 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m03:/home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n                                                                 | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | ha-844661-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-844661 ssh -n ha-844661-m03 sudo cat                                          | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC | 05 Nov 24 18:07 UTC |
	|         | /home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-844661 node stop m02 -v=7                                                     | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-844661 node start m02 -v=7                                                    | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-844661 -v=7                                                           | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-844661 -v=7                                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-844661 --wait=true -v=7                                                    | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:12 UTC | 05 Nov 24 18:16 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-844661                                                                | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:16 UTC |                     |
	| node    | ha-844661 node delete m03 -v=7                                                   | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:16 UTC | 05 Nov 24 18:17 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-844661 stop -v=7                                                              | ha-844661 | jenkins | v1.34.0 | 05 Nov 24 18:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:12:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:12:36.264651   32820 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:12:36.264750   32820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:12:36.264758   32820 out.go:358] Setting ErrFile to fd 2...
	I1105 18:12:36.264762   32820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:12:36.264969   32820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:12:36.265498   32820 out.go:352] Setting JSON to false
	I1105 18:12:36.266393   32820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3298,"bootTime":1730827058,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:12:36.266487   32820 start.go:139] virtualization: kvm guest
	I1105 18:12:36.268655   32820 out.go:177] * [ha-844661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:12:36.270032   32820 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:12:36.270042   32820 notify.go:220] Checking for updates...
	I1105 18:12:36.272639   32820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:12:36.273735   32820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:12:36.274803   32820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:12:36.275940   32820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:12:36.277293   32820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:12:36.278893   32820 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:12:36.279005   32820 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:12:36.279416   32820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:12:36.279468   32820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:12:36.294292   32820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I1105 18:12:36.294643   32820 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:12:36.295148   32820 main.go:141] libmachine: Using API Version  1
	I1105 18:12:36.295168   32820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:12:36.295522   32820 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:12:36.295722   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:12:36.331820   32820 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:12:36.333076   32820 start.go:297] selected driver: kvm2
	I1105 18:12:36.333093   32820 start.go:901] validating driver "kvm2" against &{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false defa
ult-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:12:36.333218   32820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:12:36.333522   32820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:12:36.333587   32820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:12:36.348383   32820 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:12:36.349385   32820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:12:36.349431   32820 cni.go:84] Creating CNI manager for ""
	I1105 18:12:36.349495   32820 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1105 18:12:36.349577   32820 start.go:340] cluster config:
	{Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fals
e headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:12:36.349770   32820 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:12:36.351520   32820 out.go:177] * Starting "ha-844661" primary control-plane node in "ha-844661" cluster
	I1105 18:12:36.352938   32820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:12:36.352971   32820 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:12:36.352977   32820 cache.go:56] Caching tarball of preloaded images
	I1105 18:12:36.353035   32820 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:12:36.353045   32820 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:12:36.353152   32820 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/config.json ...
	I1105 18:12:36.353345   32820 start.go:360] acquireMachinesLock for ha-844661: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:12:36.353383   32820 start.go:364] duration metric: took 21.817µs to acquireMachinesLock for "ha-844661"
	I1105 18:12:36.353397   32820 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:12:36.353404   32820 fix.go:54] fixHost starting: 
	I1105 18:12:36.353644   32820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:12:36.353674   32820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:12:36.367908   32820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I1105 18:12:36.368363   32820 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:12:36.368931   32820 main.go:141] libmachine: Using API Version  1
	I1105 18:12:36.368950   32820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:12:36.369221   32820 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:12:36.369408   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:12:36.369522   32820 main.go:141] libmachine: (ha-844661) Calling .GetState
	I1105 18:12:36.371279   32820 fix.go:112] recreateIfNeeded on ha-844661: state=Running err=<nil>
	W1105 18:12:36.371313   32820 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:12:36.373142   32820 out.go:177] * Updating the running kvm2 "ha-844661" VM ...
	I1105 18:12:36.374198   32820 machine.go:93] provisionDockerMachine start ...
	I1105 18:12:36.374217   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:12:36.374453   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.376649   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.377009   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.377036   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.377130   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:36.377277   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.377382   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.377491   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:36.377609   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:12:36.377795   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:12:36.377806   32820 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:12:36.484571   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661
	
	I1105 18:12:36.484605   32820 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:12:36.484833   32820 buildroot.go:166] provisioning hostname "ha-844661"
	I1105 18:12:36.484855   32820 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:12:36.484990   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.487717   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.488126   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.488146   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.488319   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:36.488473   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.488657   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.488808   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:36.488986   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:12:36.489184   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:12:36.489203   32820 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844661 && echo "ha-844661" | sudo tee /etc/hostname
	I1105 18:12:36.614479   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844661
	
	I1105 18:12:36.614503   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.617291   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.617674   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.617703   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.617970   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:36.618173   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.618300   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.618396   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:36.618529   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:12:36.618772   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:12:36.618796   32820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844661/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:12:36.724419   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:12:36.724453   32820 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:12:36.724486   32820 buildroot.go:174] setting up certificates
	I1105 18:12:36.724494   32820 provision.go:84] configureAuth start
	I1105 18:12:36.724506   32820 main.go:141] libmachine: (ha-844661) Calling .GetMachineName
	I1105 18:12:36.724767   32820 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:12:36.727323   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.727631   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.727655   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.727795   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.730072   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.730531   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.730559   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.730695   32820 provision.go:143] copyHostCerts
	I1105 18:12:36.730733   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:12:36.730781   32820 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:12:36.730798   32820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:12:36.730875   32820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:12:36.730985   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:12:36.731011   32820 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:12:36.731020   32820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:12:36.731064   32820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:12:36.731124   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:12:36.731146   32820 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:12:36.731152   32820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:12:36.731174   32820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:12:36.731238   32820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.ha-844661 san=[127.0.0.1 192.168.39.48 ha-844661 localhost minikube]
	I1105 18:12:36.958461   32820 provision.go:177] copyRemoteCerts
	I1105 18:12:36.958514   32820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:12:36.958537   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:36.961005   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.961360   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:36.961390   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:36.961558   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:36.961725   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:36.961853   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:36.961951   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:12:37.041731   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:12:37.041790   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:12:37.070338   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:12:37.070404   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1105 18:12:37.099030   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:12:37.099094   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:12:37.127270   32820 provision.go:87] duration metric: took 402.761718ms to configureAuth
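The configureAuth step above regenerates the machine server certificate with the SANs listed at provision.go:117 (127.0.0.1, 192.168.39.48, ha-844661, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. Below is a minimal standard-library sketch of issuing a certificate with those SANs; it self-signs for brevity (the real flow signs with the ca.pem/ca-key.pem pair), so treat it as an illustration rather than minikube's implementation.

```go
// certsketch.go - sketch (not minikube's code): issue a self-signed server
// certificate carrying the SANs seen in the provisioning log above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-844661"}}, // org= value from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision step: hostnames and IPs are carried separately.
		DNSNames:    []string{"ha-844661", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.48")},
	}
	// Self-signed here for brevity; the real flow uses the CA key pair as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```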
	I1105 18:12:37.127295   32820 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:12:37.127499   32820 config.go:182] Loaded profile config "ha-844661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:12:37.127559   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:12:37.130261   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:37.130632   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:12:37.130662   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:12:37.130805   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:12:37.131005   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:37.131135   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:12:37.131279   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:12:37.131422   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:12:37.131605   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:12:37.131631   32820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:14:07.820819   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:14:07.820854   32820 machine.go:96] duration metric: took 1m31.446641227s to provisionDockerMachine
	I1105 18:14:07.820870   32820 start.go:293] postStartSetup for "ha-844661" (driver="kvm2")
	I1105 18:14:07.820884   32820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:14:07.820907   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:07.821236   32820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:14:07.821269   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:07.824568   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:07.825194   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:07.825216   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:07.825401   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:07.825559   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:07.825758   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:07.825919   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:14:07.906435   32820 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:14:07.910633   32820 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:14:07.910664   32820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:14:07.910729   32820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:14:07.910811   32820 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:14:07.910821   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:14:07.910923   32820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:14:07.920114   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:14:07.942958   32820 start.go:296] duration metric: took 122.072341ms for postStartSetup
	I1105 18:14:07.943017   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:07.943293   32820 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 18:14:07.943317   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:07.946000   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:07.946445   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:07.946473   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:07.946615   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:07.946762   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:07.946924   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:07.947055   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	W1105 18:14:08.024811   32820 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1105 18:14:08.024839   32820 fix.go:56] duration metric: took 1m31.67143423s for fixHost
	I1105 18:14:08.024865   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:08.027573   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.027942   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:08.027971   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.028134   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:08.028311   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:08.028446   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:08.028560   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:08.028713   32820 main.go:141] libmachine: Using SSH client type: native
	I1105 18:14:08.028885   32820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1105 18:14:08.028895   32820 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:14:08.127575   32820 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830448.095489424
	
	I1105 18:14:08.127598   32820 fix.go:216] guest clock: 1730830448.095489424
	I1105 18:14:08.127608   32820 fix.go:229] Guest: 2024-11-05 18:14:08.095489424 +0000 UTC Remote: 2024-11-05 18:14:08.024849059 +0000 UTC m=+91.798663283 (delta=70.640365ms)
	I1105 18:14:08.127631   32820 fix.go:200] guest clock delta is within tolerance: 70.640365ms
	I1105 18:14:08.127638   32820 start.go:83] releasing machines lock for "ha-844661", held for 1m31.774244883s
	I1105 18:14:08.127662   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:08.127967   32820 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:14:08.130885   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.131325   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:08.131351   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.131515   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:08.132023   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:08.132193   32820 main.go:141] libmachine: (ha-844661) Calling .DriverName
	I1105 18:14:08.132294   32820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:14:08.132325   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:08.132369   32820 ssh_runner.go:195] Run: cat /version.json
	I1105 18:14:08.132387   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHHostname
	I1105 18:14:08.135115   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.135313   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.135458   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:08.135480   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.135626   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:08.135799   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:08.135812   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:08.135817   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:08.135967   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHPort
	I1105 18:14:08.135986   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:08.136166   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHKeyPath
	I1105 18:14:08.136159   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:14:08.136327   32820 main.go:141] libmachine: (ha-844661) Calling .GetSSHUsername
	I1105 18:14:08.136467   32820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/ha-844661/id_rsa Username:docker}
	I1105 18:14:08.243954   32820 ssh_runner.go:195] Run: systemctl --version
	I1105 18:14:08.249803   32820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:14:08.408281   32820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:14:08.413746   32820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:14:08.413802   32820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:14:08.422911   32820 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:14:08.422944   32820 start.go:495] detecting cgroup driver to use...
	I1105 18:14:08.423022   32820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:14:08.439441   32820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:14:08.453154   32820 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:14:08.453226   32820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:14:08.466607   32820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:14:08.480889   32820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:14:08.641584   32820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:14:08.801408   32820 docker.go:233] disabling docker service ...
	I1105 18:14:08.801480   32820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:14:08.818959   32820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:14:08.832911   32820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:14:08.974339   32820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:14:09.116439   32820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:14:09.129983   32820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:14:09.147386   32820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:14:09.147438   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.157785   32820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:14:09.157855   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.168079   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.178138   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.187990   32820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:14:09.199052   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.209900   32820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.221084   32820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:14:09.233167   32820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:14:09.244032   32820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:14:09.253398   32820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:14:09.395366   32820 ssh_runner.go:195] Run: sudo systemctl restart crio
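The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with idempotent sed edits (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), then reloads systemd and restarts CRI-O. The sketch below performs the same style of whole-line key replacement from Go against a local copy of the drop-in; the file name is a placeholder and the helper is hypothetical, not minikube code.

```go
// criodropin.go - sketch: idempotently rewrite keys in a CRI-O drop-in,
// mirroring the sed edits in the log (run against a local copy of the file).
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces any existing `key = ...` line, matching the sed pattern used above,
// or appends the setting if the key is not present yet.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	const path = "02-crio.conf" // local copy; the guest path is /etc/crio/crio.conf.d/02-crio.conf
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	// On the guest this is followed by `systemctl daemon-reload` and
	// `systemctl restart crio`, as the log shows.
}
```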
	I1105 18:14:09.625677   32820 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:14:09.625758   32820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:14:09.630532   32820 start.go:563] Will wait 60s for crictl version
	I1105 18:14:09.630590   32820 ssh_runner.go:195] Run: which crictl
	I1105 18:14:09.634273   32820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:14:09.668826   32820 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:14:09.668913   32820 ssh_runner.go:195] Run: crio --version
	I1105 18:14:09.697216   32820 ssh_runner.go:195] Run: crio --version
	I1105 18:14:09.726062   32820 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:14:09.727487   32820 main.go:141] libmachine: (ha-844661) Calling .GetIP
	I1105 18:14:09.729815   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:09.730155   32820 main.go:141] libmachine: (ha-844661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:57:dd", ip: ""} in network mk-ha-844661: {Iface:virbr1 ExpiryTime:2024-11-05 19:03:34 +0000 UTC Type:0 Mac:52:54:00:ba:57:dd Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-844661 Clientid:01:52:54:00:ba:57:dd}
	I1105 18:14:09.730188   32820 main.go:141] libmachine: (ha-844661) DBG | domain ha-844661 has defined IP address 192.168.39.48 and MAC address 52:54:00:ba:57:dd in network mk-ha-844661
	I1105 18:14:09.730419   32820 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:14:09.734905   32820 kubeadm.go:883] updating cluster {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:14:09.735069   32820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:14:09.735113   32820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:14:09.779463   32820 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:14:09.779481   32820 crio.go:433] Images already preloaded, skipping extraction
	I1105 18:14:09.779530   32820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:14:09.814166   32820 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:14:09.814190   32820 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:14:09.814201   32820 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.2 crio true true} ...
	I1105 18:14:09.814335   32820 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:14:09.814406   32820 ssh_runner.go:195] Run: crio config
	I1105 18:14:09.871793   32820 cni.go:84] Creating CNI manager for ""
	I1105 18:14:09.871812   32820 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1105 18:14:09.871820   32820 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:14:09.871847   32820 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844661 NodeName:ha-844661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:14:09.871961   32820 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
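The generated kubeadm config assigns podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12, and those two ranges must not overlap for pod and service routing to work. A quick stdlib-only check of that invariant, written as a hypothetical standalone helper:

```go
// cidrcheck.go - sketch: verify the pod and service CIDRs from the kubeadm
// config above are disjoint (a hypothetical check, not minikube code).
package main

import (
	"fmt"
	"net/netip"
)

// overlaps reports whether two prefixes share any addresses; aligned CIDR
// blocks overlap only when one contains the other's base address.
func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	podSubnet := netip.MustParsePrefix("10.244.0.0/16")    // networking.podSubnet
	serviceSubnet := netip.MustParsePrefix("10.96.0.0/12") // networking.serviceSubnet
	if overlaps(podSubnet, serviceSubnet) {
		fmt.Println("pod and service CIDRs overlap - traffic would be misrouted")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}
```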
	
	I1105 18:14:09.871982   32820 kube-vip.go:115] generating kube-vip config ...
	I1105 18:14:09.872030   32820 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 18:14:09.883010   32820 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:14:09.883133   32820 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
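The kube-vip manifest above drives leader election for the control-plane VIP 192.168.39.254 with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1 (seconds). Leader election generally requires leaseDuration > renewDeadline > retryPeriod; the sketch below only encodes that ordering check over the values from the manifest and is a hypothetical validation, not part of kube-vip.

```go
// kubevipcheck.go - sketch: validate the leader-election timings taken from
// the kube-vip manifest above (hypothetical check, not kube-vip's own code).
package main

import (
	"fmt"
	"time"
)

type electionTimings struct {
	LeaseDuration time.Duration // vip_leaseduration
	RenewDeadline time.Duration // vip_renewdeadline
	RetryPeriod   time.Duration // vip_retryperiod
}

// validate enforces the usual ordering required for stable leader election.
func (t electionTimings) validate() error {
	if t.LeaseDuration <= t.RenewDeadline {
		return fmt.Errorf("leaseDuration %v must be greater than renewDeadline %v", t.LeaseDuration, t.RenewDeadline)
	}
	if t.RenewDeadline <= t.RetryPeriod {
		return fmt.Errorf("renewDeadline %v must be greater than retryPeriod %v", t.RenewDeadline, t.RetryPeriod)
	}
	return nil
}

func main() {
	// Values from the generated manifest: 5s / 3s / 1s.
	t := electionTimings{LeaseDuration: 5 * time.Second, RenewDeadline: 3 * time.Second, RetryPeriod: time.Second}
	if err := t.validate(); err != nil {
		fmt.Println("invalid kube-vip election timings:", err)
		return
	}
	fmt.Println("kube-vip election timings are consistent")
}
```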
	I1105 18:14:09.883195   32820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:14:09.892426   32820 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:14:09.892486   32820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 18:14:09.901149   32820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1105 18:14:09.916722   32820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:14:09.931795   32820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1105 18:14:09.947312   32820 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:14:09.963798   32820 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:14:09.968528   32820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:14:10.111653   32820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:14:10.126222   32820 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661 for IP: 192.168.39.48
	I1105 18:14:10.126247   32820 certs.go:194] generating shared ca certs ...
	I1105 18:14:10.126266   32820 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:14:10.126436   32820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:14:10.126500   32820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:14:10.126512   32820 certs.go:256] generating profile certs ...
	I1105 18:14:10.126589   32820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/client.key
	I1105 18:14:10.126617   32820 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.f667108c
	I1105 18:14:10.126629   32820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.f667108c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48 192.168.39.38 192.168.39.52 192.168.39.254]
	I1105 18:14:10.220928   32820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.f667108c ...
	I1105 18:14:10.220961   32820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.f667108c: {Name:mka22debdd11ee8a23fa8fa253ceeed26967ff51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:14:10.221122   32820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.f667108c ...
	I1105 18:14:10.221133   32820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.f667108c: {Name:mk848c00e2ba3cfb4c1249063b757c53981df156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:14:10.221196   32820 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt.f667108c -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt
	I1105 18:14:10.221353   32820 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key.f667108c -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key
	I1105 18:14:10.221481   32820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key
	I1105 18:14:10.221496   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:14:10.221508   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:14:10.221519   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:14:10.221552   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:14:10.221565   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:14:10.221578   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:14:10.221590   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:14:10.221600   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:14:10.221649   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:14:10.221675   32820 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:14:10.221684   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:14:10.221704   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:14:10.221727   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:14:10.221749   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:14:10.221785   32820 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:14:10.221811   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:14:10.221825   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:14:10.221837   32820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:14:10.222385   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:14:10.247035   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:14:10.269434   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:14:10.291588   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:14:10.314043   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 18:14:10.336047   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:14:10.357978   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:14:10.379989   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/ha-844661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:14:10.401605   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:14:10.423390   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:14:10.445276   32820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:14:10.467081   32820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:14:10.483399   32820 ssh_runner.go:195] Run: openssl version
	I1105 18:14:10.489446   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:14:10.499902   32820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:14:10.504193   32820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:14:10.504233   32820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:14:10.509737   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:14:10.518891   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:14:10.529521   32820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:14:10.534509   32820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:14:10.534563   32820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:14:10.539809   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:14:10.548730   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:14:10.558894   32820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:14:10.563669   32820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:14:10.563712   32820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:14:10.569167   32820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:14:10.578119   32820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:14:10.582345   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:14:10.587781   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:14:10.593143   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:14:10.598457   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:14:10.603956   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:14:10.609409   32820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 18:14:10.614908   32820 kubeadm.go:392] StartCluster: {Name:ha-844661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-844661 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.52 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecla
ss:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:14:10.615087   32820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:14:10.615149   32820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:14:10.649544   32820 cri.go:89] found id: "f410dd632de8723060cf99fa3a27b6f67582270c89d3f6f2e065e06cb776d60c"
	I1105 18:14:10.649567   32820 cri.go:89] found id: "c118bfbaef057c7dc0a1d79e948469159301ea31c97d3e4a0795519c06de55d1"
	I1105 18:14:10.649571   32820 cri.go:89] found id: "6e98dccc93e8133f64922055804c80f2d565c34af5c38b8d799a314133853d95"
	I1105 18:14:10.649575   32820 cri.go:89] found id: "c12cccdde9a468fa488f3e906c405113374ee5a6f8160487a01d17b2a3952f04"
	I1105 18:14:10.649577   32820 cri.go:89] found id: "4504233c88e522a7826c9501cbe65f5f6c480a91773993fd941d6b6a98dd86c8"
	I1105 18:14:10.649580   32820 cri.go:89] found id: "2c9fc5d833b4184cb254fea8c8b61b2ae22665ae87095d16614fa21b4f2c061a"
	I1105 18:14:10.649583   32820 cri.go:89] found id: "bf77486744a30a06fb149591f9ca8b751c63a55561f16eab4481331438ef4acf"
	I1105 18:14:10.649585   32820 cri.go:89] found id: "1c753c07805a44aacc5012c6fa7057691adc127301afedfcae0a25fc9dd924d6"
	I1105 18:14:10.649588   32820 cri.go:89] found id: "9fc39705114925fd0178d16b6065166d69ab346ba081ad5aea897f31565f59b0"
	I1105 18:14:10.649592   32820 cri.go:89] found id: "f06b75f1a25013d5ba62df6acb6480ac8c4d155aff30bb297c085f255453d5fc"
	I1105 18:14:10.649602   32820 cri.go:89] found id: "695ba2636aaa9959e0e13cdd3345ebca43a0ce9d14f5a016e4a6a53ecc9d4dab"
	I1105 18:14:10.649604   32820 cri.go:89] found id: "9fc529f9c17c8050437784a9860055c341ef8c1f5e98dd813c6e76f8f8198f0c"
	I1105 18:14:10.649607   32820 cri.go:89] found id: "d6c4df079853940688a343f0a864835b2c051f0bbbf0ade864e5724fb991cc8f"
	I1105 18:14:10.649610   32820 cri.go:89] found id: ""
	I1105 18:14:10.649648   32820 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844661 -n ha-844661
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.91s)
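Note on the post-mortem above: before deciding whether it can reuse the cluster, minikube verifies over SSH that each control-plane certificate is still good for at least another 24 hours ("openssl x509 -noout -in <cert> -checkend 86400") and re-hashes the CA bundles into /etc/ssl/certs. Purely as an illustration of the -checkend idiom (not minikube's internal ssh_runner code), here is a standalone Go sketch that drives the same openssl check locally; the certificate path in main is a hypothetical example.

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor reports whether the certificate at path will still be valid
// for at least the given number of seconds, using the same
// "openssl x509 -noout -checkend <seconds>" call seen in the log above.
// openssl exits 0 when the certificate does NOT expire within that window.
func certValidFor(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", fmt.Sprint(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // openssl ran but the cert expires within the window
		}
		return false, err // openssl missing, file unreadable, etc.
	}
	return true, nil
}

func main() {
	// Hypothetical local path; in the log these checks run inside the guest VM.
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for at least 24h:", ok)
}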

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (325.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-501442
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-501442
E1105 18:34:06.921201   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-501442: exit status 82 (2m1.789994251s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-501442-m03"  ...
	* Stopping node "multinode-501442-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-501442" : exit status 82
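Exit status 82 here is minikube's GUEST_STOP_TIMEOUT: the stop command asked the driver to shut the remaining node down and then gave up after roughly two minutes with the VM still reporting "Running". Purely as a sketch of that poll-until-stopped pattern, under the assumption that a driver exposes stop and state calls (stopVM, vmState, and the timing below are hypothetical placeholders, not the libmachine driver API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopVM and vmState are hypothetical stand-ins for a VM driver's stop and
// state calls; they are not minikube's real driver interface.
func stopVM(name string) error { fmt.Println("stopping", name); return nil }

func vmState(name string) string { return "Running" } // pretend the guest never halts

// stopWithTimeout requests a stop and then polls the reported state until it
// reads "Stopped" or the deadline passes, roughly the failure mode behind the
// GUEST_STOP_TIMEOUT in the log above.
func stopWithTimeout(name string, timeout time.Duration) error {
	if err := stopVM(name); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if vmState(name) == "Stopped" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Timeout shortened for illustration; the log shows roughly a two-minute window.
	if err := stopWithTimeout("multinode-501442-m03", 5*time.Second); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}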
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501442 --wait=true -v=8 --alsologtostderr
E1105 18:37:31.422713   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:39:06.921614   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-501442 --wait=true -v=8 --alsologtostderr: (3m21.4203736s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-501442
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-501442 -n multinode-501442
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-501442 logs -n 25: (1.969539591s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m02:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3513316962/001/cp-test_multinode-501442-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m02:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442:/home/docker/cp-test_multinode-501442-m02_multinode-501442.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442 sudo cat                                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m02_multinode-501442.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m02:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03:/home/docker/cp-test_multinode-501442-m02_multinode-501442-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442-m03 sudo cat                                   | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m02_multinode-501442-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp testdata/cp-test.txt                                                | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3513316962/001/cp-test_multinode-501442-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442:/home/docker/cp-test_multinode-501442-m03_multinode-501442.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442 sudo cat                                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m03_multinode-501442.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02:/home/docker/cp-test_multinode-501442-m03_multinode-501442-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442-m02 sudo cat                                   | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m03_multinode-501442-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-501442 node stop m03                                                          | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	| node    | multinode-501442 node start                                                             | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:34 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-501442                                                                | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:34 UTC |                     |
	| stop    | -p multinode-501442                                                                     | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:34 UTC |                     |
	| start   | -p multinode-501442                                                                     | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:36 UTC | 05 Nov 24 18:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-501442                                                                | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:36:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:36:02.962285   44959 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:36:02.962422   44959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:36:02.962431   44959 out.go:358] Setting ErrFile to fd 2...
	I1105 18:36:02.962435   44959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:36:02.962630   44959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:36:02.963250   44959 out.go:352] Setting JSON to false
	I1105 18:36:02.964143   44959 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4705,"bootTime":1730827058,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:36:02.964240   44959 start.go:139] virtualization: kvm guest
	I1105 18:36:02.966468   44959 out.go:177] * [multinode-501442] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:36:02.967768   44959 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:36:02.967793   44959 notify.go:220] Checking for updates...
	I1105 18:36:02.970165   44959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:36:02.971529   44959 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:36:02.972806   44959 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:36:02.974014   44959 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:36:02.975356   44959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:36:02.977032   44959 config.go:182] Loaded profile config "multinode-501442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:36:02.977150   44959 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:36:02.977620   44959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:36:02.977670   44959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:36:02.993248   44959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33587
	I1105 18:36:02.993835   44959 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:36:02.994481   44959 main.go:141] libmachine: Using API Version  1
	I1105 18:36:02.994503   44959 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:36:02.994899   44959 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:36:02.995125   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:36:03.032468   44959 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:36:03.033828   44959 start.go:297] selected driver: kvm2
	I1105 18:36:03.033844   44959 start.go:901] validating driver "kvm2" against &{Name:multinode-501442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:36:03.033998   44959 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:36:03.034326   44959 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:36:03.034411   44959 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:36:03.050322   44959 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:36:03.051286   44959 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:36:03.051330   44959 cni.go:84] Creating CNI manager for ""
	I1105 18:36:03.051394   44959 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 18:36:03.051467   44959 start.go:340] cluster config:
	{Name:multinode-501442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:36:03.051646   44959 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:36:03.053774   44959 out.go:177] * Starting "multinode-501442" primary control-plane node in "multinode-501442" cluster
	I1105 18:36:03.055042   44959 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:36:03.055083   44959 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:36:03.055090   44959 cache.go:56] Caching tarball of preloaded images
	I1105 18:36:03.055178   44959 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:36:03.055192   44959 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:36:03.055367   44959 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/config.json ...
	I1105 18:36:03.055614   44959 start.go:360] acquireMachinesLock for multinode-501442: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:36:03.055658   44959 start.go:364] duration metric: took 23.718µs to acquireMachinesLock for "multinode-501442"
	I1105 18:36:03.055673   44959 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:36:03.055681   44959 fix.go:54] fixHost starting: 
	I1105 18:36:03.056036   44959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:36:03.056072   44959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:36:03.070656   44959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1105 18:36:03.071125   44959 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:36:03.071686   44959 main.go:141] libmachine: Using API Version  1
	I1105 18:36:03.071711   44959 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:36:03.072033   44959 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:36:03.072235   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:36:03.072397   44959 main.go:141] libmachine: (multinode-501442) Calling .GetState
	I1105 18:36:03.073915   44959 fix.go:112] recreateIfNeeded on multinode-501442: state=Running err=<nil>
	W1105 18:36:03.073937   44959 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:36:03.075985   44959 out.go:177] * Updating the running kvm2 "multinode-501442" VM ...
	I1105 18:36:03.077407   44959 machine.go:93] provisionDockerMachine start ...
	I1105 18:36:03.077432   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:36:03.077642   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.080113   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.080561   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.080595   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.080765   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.080930   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.081081   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.081192   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.081366   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:36:03.081568   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:36:03.081579   44959 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:36:03.195936   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-501442
	
	I1105 18:36:03.195976   44959 main.go:141] libmachine: (multinode-501442) Calling .GetMachineName
	I1105 18:36:03.196262   44959 buildroot.go:166] provisioning hostname "multinode-501442"
	I1105 18:36:03.196294   44959 main.go:141] libmachine: (multinode-501442) Calling .GetMachineName
	I1105 18:36:03.196544   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.199085   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.199492   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.199521   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.199695   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.199866   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.200045   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.200179   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.200362   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:36:03.200518   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:36:03.200528   44959 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-501442 && echo "multinode-501442" | sudo tee /etc/hostname
	I1105 18:36:03.330594   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-501442
	
	I1105 18:36:03.330627   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.333516   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.333915   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.333945   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.334087   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.334284   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.334496   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.334735   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.334932   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:36:03.335156   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:36:03.335174   44959 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-501442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-501442/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-501442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:36:03.452265   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:36:03.452300   44959 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:36:03.452351   44959 buildroot.go:174] setting up certificates
	I1105 18:36:03.452367   44959 provision.go:84] configureAuth start
	I1105 18:36:03.452380   44959 main.go:141] libmachine: (multinode-501442) Calling .GetMachineName
	I1105 18:36:03.452695   44959 main.go:141] libmachine: (multinode-501442) Calling .GetIP
	I1105 18:36:03.455502   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.455875   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.455906   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.456061   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.458330   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.458599   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.458633   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.458795   44959 provision.go:143] copyHostCerts
	I1105 18:36:03.458828   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:36:03.458870   44959 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:36:03.458886   44959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:36:03.458990   44959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:36:03.459110   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:36:03.459136   44959 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:36:03.459144   44959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:36:03.459187   44959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:36:03.459267   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:36:03.459300   44959 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:36:03.459310   44959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:36:03.459345   44959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:36:03.459432   44959 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.multinode-501442 san=[127.0.0.1 192.168.39.235 localhost minikube multinode-501442]
	I1105 18:36:03.627180   44959 provision.go:177] copyRemoteCerts
	I1105 18:36:03.627245   44959 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:36:03.627274   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.630165   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.630528   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.630562   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.630712   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.630932   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.631164   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.631291   44959 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:36:03.718061   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:36:03.718143   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:36:03.742436   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:36:03.742499   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1105 18:36:03.765895   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:36:03.765971   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:36:03.789139   44959 provision.go:87] duration metric: took 336.758403ms to configureAuth
	I1105 18:36:03.789167   44959 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:36:03.789375   44959 config.go:182] Loaded profile config "multinode-501442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:36:03.789445   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.792249   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.792547   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.792575   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.792764   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.792965   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.793162   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.793297   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.793433   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:36:03.793664   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:36:03.793685   44959 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:37:34.424465   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:37:34.424490   44959 machine.go:96] duration metric: took 1m31.347066615s to provisionDockerMachine
	I1105 18:37:34.424509   44959 start.go:293] postStartSetup for "multinode-501442" (driver="kvm2")
	I1105 18:37:34.424523   44959 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:37:34.424547   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.424857   44959 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:37:34.424905   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:37:34.428050   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.428503   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.428530   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.428785   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:37:34.428971   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.429120   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:37:34.429265   44959 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:37:34.518656   44959 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:37:34.522499   44959 command_runner.go:130] > NAME=Buildroot
	I1105 18:37:34.522522   44959 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1105 18:37:34.522529   44959 command_runner.go:130] > ID=buildroot
	I1105 18:37:34.522545   44959 command_runner.go:130] > VERSION_ID=2023.02.9
	I1105 18:37:34.522552   44959 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1105 18:37:34.522636   44959 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:37:34.522670   44959 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:37:34.522749   44959 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:37:34.522844   44959 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:37:34.522856   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:37:34.522987   44959 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:37:34.532241   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:37:34.555002   44959 start.go:296] duration metric: took 130.47732ms for postStartSetup
	I1105 18:37:34.555058   44959 fix.go:56] duration metric: took 1m31.499375969s for fixHost
	I1105 18:37:34.555082   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:37:34.557816   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.558161   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.558184   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.558388   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:37:34.558582   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.558759   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.558892   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:37:34.559126   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:37:34.559318   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:37:34.559335   44959 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:37:34.671504   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730831854.647121236
	
	I1105 18:37:34.671533   44959 fix.go:216] guest clock: 1730831854.647121236
	I1105 18:37:34.671540   44959 fix.go:229] Guest: 2024-11-05 18:37:34.647121236 +0000 UTC Remote: 2024-11-05 18:37:34.555064873 +0000 UTC m=+91.633953874 (delta=92.056363ms)
	I1105 18:37:34.671563   44959 fix.go:200] guest clock delta is within tolerance: 92.056363ms
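The clock check above probes the guest with 'date +%s.%N', compares it against the host wall clock, and only resyncs the guest when the delta exceeds minikube's tolerance (here the 92ms skew is accepted). A rough manual version of the same comparison, reusing the SSH key and user shown earlier in this log (use of bc is an assumption, not taken from this run):

	# guest-vs-host clock skew check for multinode-501442 (sketch; mirrors the 'date +%s.%N' probe above)
	guest=$(ssh -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa \
	    docker@192.168.39.235 'date +%s.%N')
	host=$(date +%s.%N)
	# positive result means the guest clock runs ahead of the host
	echo "skew: $(echo "$guest - $host" | bc) s"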
	I1105 18:37:34.671570   44959 start.go:83] releasing machines lock for "multinode-501442", held for 1m31.615905036s
	I1105 18:37:34.671593   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.671864   44959 main.go:141] libmachine: (multinode-501442) Calling .GetIP
	I1105 18:37:34.675007   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.675534   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.675553   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.675770   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.676353   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.676532   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.676645   44959 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:37:34.676711   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:37:34.676765   44959 ssh_runner.go:195] Run: cat /version.json
	I1105 18:37:34.676792   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:37:34.679346   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.679598   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.679752   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.679789   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.679917   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:37:34.680054   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.680069   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.680076   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.680217   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:37:34.680239   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:37:34.680346   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.680344   44959 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:37:34.680455   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:37:34.680559   44959 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:37:34.759511   44959 command_runner.go:130] > {"iso_version": "v1.34.0-1730282777-19883", "kicbase_version": "v0.0.45-1730110049-19872", "minikube_version": "v1.34.0", "commit": "7738213fbe7cb3f4867f3e3b534798700ea0e3fb"}
	I1105 18:37:34.787765   44959 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1105 18:37:34.788471   44959 ssh_runner.go:195] Run: systemctl --version
	I1105 18:37:34.794865   44959 command_runner.go:130] > systemd 252 (252)
	I1105 18:37:34.794904   44959 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1105 18:37:34.794978   44959 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:37:34.953687   44959 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 18:37:34.959327   44959 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1105 18:37:34.959382   44959 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:37:34.959428   44959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:37:34.968202   44959 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:37:34.968224   44959 start.go:495] detecting cgroup driver to use...
	I1105 18:37:34.968354   44959 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:37:34.983761   44959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:37:34.997177   44959 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:37:34.997245   44959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:37:35.010296   44959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:37:35.023735   44959 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:37:35.174162   44959 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:37:35.307417   44959 docker.go:233] disabling docker service ...
	I1105 18:37:35.307493   44959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:37:35.324046   44959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:37:35.337088   44959 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:37:35.481162   44959 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:37:35.625311   44959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:37:35.640563   44959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:37:35.657964   44959 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1105 18:37:35.658394   44959 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:37:35.658451   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.668360   44959 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:37:35.668440   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.678403   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.688139   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.697889   44959 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:37:35.707971   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.717816   44959 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.727750   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.737841   44959 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:37:35.747080   44959 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1105 18:37:35.747177   44959 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:37:35.756398   44959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:37:35.897124   44959 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:37:38.672205   44959 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.775043085s)
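Taken together, the steps above disable docker and cri-docker, point crictl at the CRI-O socket, and patch /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before restarting crio. A minimal sketch of the same net effect expressed as one drop-in file instead of sed edits; the drop-in name and the TOML section headers are assumptions for illustration, not read from this run:

	# sketch: a CRI-O drop-in capturing the values the sed commands above set
	sudo tee /etc/crio/crio.conf.d/99-minikube.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio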
	I1105 18:37:38.672236   44959 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:37:38.672278   44959 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:37:38.677963   44959 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1105 18:37:38.677994   44959 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1105 18:37:38.678004   44959 command_runner.go:130] > Device: 0,22	Inode: 1301        Links: 1
	I1105 18:37:38.678012   44959 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1105 18:37:38.678017   44959 command_runner.go:130] > Access: 2024-11-05 18:37:38.582013562 +0000
	I1105 18:37:38.678023   44959 command_runner.go:130] > Modify: 2024-11-05 18:37:38.548012703 +0000
	I1105 18:37:38.678027   44959 command_runner.go:130] > Change: 2024-11-05 18:37:38.548012703 +0000
	I1105 18:37:38.678031   44959 command_runner.go:130] >  Birth: -
	I1105 18:37:38.678130   44959 start.go:563] Will wait 60s for crictl version
	I1105 18:37:38.678201   44959 ssh_runner.go:195] Run: which crictl
	I1105 18:37:38.681786   44959 command_runner.go:130] > /usr/bin/crictl
	I1105 18:37:38.681935   44959 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:37:38.715851   44959 command_runner.go:130] > Version:  0.1.0
	I1105 18:37:38.715878   44959 command_runner.go:130] > RuntimeName:  cri-o
	I1105 18:37:38.715885   44959 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1105 18:37:38.715892   44959 command_runner.go:130] > RuntimeApiVersion:  v1
	I1105 18:37:38.715912   44959 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:37:38.715988   44959 ssh_runner.go:195] Run: crio --version
	I1105 18:37:38.741581   44959 command_runner.go:130] > crio version 1.29.1
	I1105 18:37:38.741602   44959 command_runner.go:130] > Version:        1.29.1
	I1105 18:37:38.741611   44959 command_runner.go:130] > GitCommit:      unknown
	I1105 18:37:38.741618   44959 command_runner.go:130] > GitCommitDate:  unknown
	I1105 18:37:38.741624   44959 command_runner.go:130] > GitTreeState:   clean
	I1105 18:37:38.741634   44959 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1105 18:37:38.741640   44959 command_runner.go:130] > GoVersion:      go1.21.6
	I1105 18:37:38.741644   44959 command_runner.go:130] > Compiler:       gc
	I1105 18:37:38.741648   44959 command_runner.go:130] > Platform:       linux/amd64
	I1105 18:37:38.741652   44959 command_runner.go:130] > Linkmode:       dynamic
	I1105 18:37:38.741656   44959 command_runner.go:130] > BuildTags:      
	I1105 18:37:38.741660   44959 command_runner.go:130] >   containers_image_ostree_stub
	I1105 18:37:38.741665   44959 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1105 18:37:38.741668   44959 command_runner.go:130] >   btrfs_noversion
	I1105 18:37:38.741673   44959 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1105 18:37:38.741680   44959 command_runner.go:130] >   libdm_no_deferred_remove
	I1105 18:37:38.741684   44959 command_runner.go:130] >   seccomp
	I1105 18:37:38.741688   44959 command_runner.go:130] > LDFlags:          unknown
	I1105 18:37:38.741695   44959 command_runner.go:130] > SeccompEnabled:   true
	I1105 18:37:38.741714   44959 command_runner.go:130] > AppArmorEnabled:  false
	I1105 18:37:38.742880   44959 ssh_runner.go:195] Run: crio --version
	I1105 18:37:38.769501   44959 command_runner.go:130] > crio version 1.29.1
	I1105 18:37:38.769532   44959 command_runner.go:130] > Version:        1.29.1
	I1105 18:37:38.769541   44959 command_runner.go:130] > GitCommit:      unknown
	I1105 18:37:38.769547   44959 command_runner.go:130] > GitCommitDate:  unknown
	I1105 18:37:38.769553   44959 command_runner.go:130] > GitTreeState:   clean
	I1105 18:37:38.769561   44959 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1105 18:37:38.769566   44959 command_runner.go:130] > GoVersion:      go1.21.6
	I1105 18:37:38.769570   44959 command_runner.go:130] > Compiler:       gc
	I1105 18:37:38.769574   44959 command_runner.go:130] > Platform:       linux/amd64
	I1105 18:37:38.769578   44959 command_runner.go:130] > Linkmode:       dynamic
	I1105 18:37:38.769589   44959 command_runner.go:130] > BuildTags:      
	I1105 18:37:38.769596   44959 command_runner.go:130] >   containers_image_ostree_stub
	I1105 18:37:38.769605   44959 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1105 18:37:38.769611   44959 command_runner.go:130] >   btrfs_noversion
	I1105 18:37:38.769620   44959 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1105 18:37:38.769627   44959 command_runner.go:130] >   libdm_no_deferred_remove
	I1105 18:37:38.769635   44959 command_runner.go:130] >   seccomp
	I1105 18:37:38.769640   44959 command_runner.go:130] > LDFlags:          unknown
	I1105 18:37:38.769644   44959 command_runner.go:130] > SeccompEnabled:   true
	I1105 18:37:38.769648   44959 command_runner.go:130] > AppArmorEnabled:  false
	I1105 18:37:38.772399   44959 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:37:38.773588   44959 main.go:141] libmachine: (multinode-501442) Calling .GetIP
	I1105 18:37:38.775860   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:38.776187   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:38.776210   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:38.776418   44959 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:37:38.780452   44959 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1105 18:37:38.780549   44959 kubeadm.go:883] updating cluster {Name:multinode-501442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:37:38.780672   44959 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:37:38.780711   44959 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:37:38.825221   44959 command_runner.go:130] > {
	I1105 18:37:38.825248   44959 command_runner.go:130] >   "images": [
	I1105 18:37:38.825252   44959 command_runner.go:130] >     {
	I1105 18:37:38.825260   44959 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1105 18:37:38.825264   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825270   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1105 18:37:38.825274   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825277   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825285   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1105 18:37:38.825291   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1105 18:37:38.825294   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825300   44959 command_runner.go:130] >       "size": "94965812",
	I1105 18:37:38.825307   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825322   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.825338   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825344   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825350   44959 command_runner.go:130] >     },
	I1105 18:37:38.825354   44959 command_runner.go:130] >     {
	I1105 18:37:38.825362   44959 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1105 18:37:38.825368   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825373   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1105 18:37:38.825379   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825383   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825398   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1105 18:37:38.825414   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1105 18:37:38.825423   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825432   44959 command_runner.go:130] >       "size": "94958644",
	I1105 18:37:38.825442   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825456   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.825464   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825468   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825476   44959 command_runner.go:130] >     },
	I1105 18:37:38.825485   44959 command_runner.go:130] >     {
	I1105 18:37:38.825497   44959 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1105 18:37:38.825506   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825514   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1105 18:37:38.825523   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825529   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825543   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1105 18:37:38.825553   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1105 18:37:38.825559   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825568   44959 command_runner.go:130] >       "size": "1363676",
	I1105 18:37:38.825578   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825588   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.825597   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825607   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825621   44959 command_runner.go:130] >     },
	I1105 18:37:38.825629   44959 command_runner.go:130] >     {
	I1105 18:37:38.825639   44959 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1105 18:37:38.825648   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825658   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1105 18:37:38.825667   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825674   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825689   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1105 18:37:38.825712   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1105 18:37:38.825720   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825725   44959 command_runner.go:130] >       "size": "31470524",
	I1105 18:37:38.825731   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825738   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.825747   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825757   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825765   44959 command_runner.go:130] >     },
	I1105 18:37:38.825773   44959 command_runner.go:130] >     {
	I1105 18:37:38.825787   44959 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1105 18:37:38.825796   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825805   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1105 18:37:38.825811   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825818   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825833   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1105 18:37:38.825848   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1105 18:37:38.825857   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825872   44959 command_runner.go:130] >       "size": "63273227",
	I1105 18:37:38.825881   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825889   44959 command_runner.go:130] >       "username": "nonroot",
	I1105 18:37:38.825893   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825897   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825905   44959 command_runner.go:130] >     },
	I1105 18:37:38.825913   44959 command_runner.go:130] >     {
	I1105 18:37:38.825926   44959 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1105 18:37:38.825942   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825953   44959 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1105 18:37:38.825962   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825968   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825976   44959 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1105 18:37:38.825988   44959 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1105 18:37:38.825997   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826004   44959 command_runner.go:130] >       "size": "149009664",
	I1105 18:37:38.826013   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826019   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.826028   44959 command_runner.go:130] >       },
	I1105 18:37:38.826036   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826044   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826053   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826060   44959 command_runner.go:130] >     },
	I1105 18:37:38.826064   44959 command_runner.go:130] >     {
	I1105 18:37:38.826071   44959 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1105 18:37:38.826080   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826089   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1105 18:37:38.826098   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826104   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826118   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1105 18:37:38.826132   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1105 18:37:38.826140   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826147   44959 command_runner.go:130] >       "size": "95274464",
	I1105 18:37:38.826156   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826163   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.826172   44959 command_runner.go:130] >       },
	I1105 18:37:38.826264   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826294   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826302   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826307   44959 command_runner.go:130] >     },
	I1105 18:37:38.826312   44959 command_runner.go:130] >     {
	I1105 18:37:38.826345   44959 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1105 18:37:38.826355   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826364   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1105 18:37:38.826372   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826377   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826400   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1105 18:37:38.826410   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1105 18:37:38.826416   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826420   44959 command_runner.go:130] >       "size": "89474374",
	I1105 18:37:38.826426   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826430   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.826443   44959 command_runner.go:130] >       },
	I1105 18:37:38.826447   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826451   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826455   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826458   44959 command_runner.go:130] >     },
	I1105 18:37:38.826461   44959 command_runner.go:130] >     {
	I1105 18:37:38.826466   44959 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1105 18:37:38.826470   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826475   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1105 18:37:38.826478   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826486   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826494   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1105 18:37:38.826503   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1105 18:37:38.826506   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826510   44959 command_runner.go:130] >       "size": "92783513",
	I1105 18:37:38.826517   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.826520   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826524   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826528   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826531   44959 command_runner.go:130] >     },
	I1105 18:37:38.826534   44959 command_runner.go:130] >     {
	I1105 18:37:38.826540   44959 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1105 18:37:38.826551   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826558   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1105 18:37:38.826562   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826566   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826577   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1105 18:37:38.826584   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1105 18:37:38.826590   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826595   44959 command_runner.go:130] >       "size": "68457798",
	I1105 18:37:38.826601   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826604   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.826608   44959 command_runner.go:130] >       },
	I1105 18:37:38.826612   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826616   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826619   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826623   44959 command_runner.go:130] >     },
	I1105 18:37:38.826626   44959 command_runner.go:130] >     {
	I1105 18:37:38.826655   44959 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1105 18:37:38.826661   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826666   44959 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1105 18:37:38.826672   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826676   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826685   44959 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1105 18:37:38.826694   44959 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1105 18:37:38.826705   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826711   44959 command_runner.go:130] >       "size": "742080",
	I1105 18:37:38.826715   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826721   44959 command_runner.go:130] >         "value": "65535"
	I1105 18:37:38.826725   44959 command_runner.go:130] >       },
	I1105 18:37:38.826731   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826735   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826740   44959 command_runner.go:130] >       "pinned": true
	I1105 18:37:38.826744   44959 command_runner.go:130] >     }
	I1105 18:37:38.826748   44959 command_runner.go:130] >   ]
	I1105 18:37:38.826756   44959 command_runner.go:130] > }
	I1105 18:37:38.826953   44959 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:37:38.826964   44959 crio.go:433] Images already preloaded, skipping extraction
	I1105 18:37:38.827039   44959 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:37:38.856940   44959 command_runner.go:130] > {
	I1105 18:37:38.856962   44959 command_runner.go:130] >   "images": [
	I1105 18:37:38.856966   44959 command_runner.go:130] >     {
	I1105 18:37:38.856974   44959 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1105 18:37:38.856984   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.856990   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1105 18:37:38.856993   44959 command_runner.go:130] >       ],
	I1105 18:37:38.856997   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857005   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1105 18:37:38.857012   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1105 18:37:38.857015   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857020   44959 command_runner.go:130] >       "size": "94965812",
	I1105 18:37:38.857023   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857027   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857031   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857035   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857038   44959 command_runner.go:130] >     },
	I1105 18:37:38.857041   44959 command_runner.go:130] >     {
	I1105 18:37:38.857047   44959 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1105 18:37:38.857050   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857056   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1105 18:37:38.857062   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857066   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857073   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1105 18:37:38.857079   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1105 18:37:38.857086   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857090   44959 command_runner.go:130] >       "size": "94958644",
	I1105 18:37:38.857093   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857101   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857106   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857111   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857115   44959 command_runner.go:130] >     },
	I1105 18:37:38.857119   44959 command_runner.go:130] >     {
	I1105 18:37:38.857124   44959 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1105 18:37:38.857130   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857136   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1105 18:37:38.857140   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857148   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857157   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1105 18:37:38.857164   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1105 18:37:38.857170   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857174   44959 command_runner.go:130] >       "size": "1363676",
	I1105 18:37:38.857179   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857184   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857191   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857195   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857199   44959 command_runner.go:130] >     },
	I1105 18:37:38.857202   44959 command_runner.go:130] >     {
	I1105 18:37:38.857208   44959 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1105 18:37:38.857214   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857220   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1105 18:37:38.857225   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857229   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857236   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1105 18:37:38.857251   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1105 18:37:38.857257   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857261   44959 command_runner.go:130] >       "size": "31470524",
	I1105 18:37:38.857268   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857272   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857275   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857279   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857283   44959 command_runner.go:130] >     },
	I1105 18:37:38.857286   44959 command_runner.go:130] >     {
	I1105 18:37:38.857292   44959 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1105 18:37:38.857297   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857301   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1105 18:37:38.857308   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857311   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857320   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1105 18:37:38.857333   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1105 18:37:38.857342   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857349   44959 command_runner.go:130] >       "size": "63273227",
	I1105 18:37:38.857353   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857359   44959 command_runner.go:130] >       "username": "nonroot",
	I1105 18:37:38.857363   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857369   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857372   44959 command_runner.go:130] >     },
	I1105 18:37:38.857376   44959 command_runner.go:130] >     {
	I1105 18:37:38.857382   44959 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1105 18:37:38.857388   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857393   44959 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1105 18:37:38.857399   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857402   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857409   44959 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1105 18:37:38.857418   44959 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1105 18:37:38.857422   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857426   44959 command_runner.go:130] >       "size": "149009664",
	I1105 18:37:38.857433   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857436   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.857442   44959 command_runner.go:130] >       },
	I1105 18:37:38.857448   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857451   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857455   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857459   44959 command_runner.go:130] >     },
	I1105 18:37:38.857461   44959 command_runner.go:130] >     {
	I1105 18:37:38.857467   44959 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1105 18:37:38.857473   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857478   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1105 18:37:38.857484   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857488   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857495   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1105 18:37:38.857509   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1105 18:37:38.857514   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857523   44959 command_runner.go:130] >       "size": "95274464",
	I1105 18:37:38.857529   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857533   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.857536   44959 command_runner.go:130] >       },
	I1105 18:37:38.857545   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857551   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857555   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857558   44959 command_runner.go:130] >     },
	I1105 18:37:38.857568   44959 command_runner.go:130] >     {
	I1105 18:37:38.857577   44959 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1105 18:37:38.857581   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857588   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1105 18:37:38.857591   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857595   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857613   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1105 18:37:38.857624   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1105 18:37:38.857627   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857654   44959 command_runner.go:130] >       "size": "89474374",
	I1105 18:37:38.857661   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857665   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.857668   44959 command_runner.go:130] >       },
	I1105 18:37:38.857672   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857675   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857679   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857682   44959 command_runner.go:130] >     },
	I1105 18:37:38.857685   44959 command_runner.go:130] >     {
	I1105 18:37:38.857691   44959 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1105 18:37:38.857697   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857702   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1105 18:37:38.857707   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857711   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857718   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1105 18:37:38.857729   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1105 18:37:38.857737   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857744   44959 command_runner.go:130] >       "size": "92783513",
	I1105 18:37:38.857747   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857751   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857754   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857758   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857761   44959 command_runner.go:130] >     },
	I1105 18:37:38.857765   44959 command_runner.go:130] >     {
	I1105 18:37:38.857773   44959 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1105 18:37:38.857777   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857784   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1105 18:37:38.857787   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857791   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857798   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1105 18:37:38.857807   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1105 18:37:38.857810   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857814   44959 command_runner.go:130] >       "size": "68457798",
	I1105 18:37:38.857818   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857821   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.857825   44959 command_runner.go:130] >       },
	I1105 18:37:38.857829   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857832   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857836   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857839   44959 command_runner.go:130] >     },
	I1105 18:37:38.857843   44959 command_runner.go:130] >     {
	I1105 18:37:38.857849   44959 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1105 18:37:38.857853   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857858   44959 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1105 18:37:38.857863   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857867   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857873   44959 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1105 18:37:38.857882   44959 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1105 18:37:38.857886   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857898   44959 command_runner.go:130] >       "size": "742080",
	I1105 18:37:38.857905   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857909   44959 command_runner.go:130] >         "value": "65535"
	I1105 18:37:38.857912   44959 command_runner.go:130] >       },
	I1105 18:37:38.857915   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857919   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857923   44959 command_runner.go:130] >       "pinned": true
	I1105 18:37:38.857926   44959 command_runner.go:130] >     }
	I1105 18:37:38.857929   44959 command_runner.go:130] >   ]
	I1105 18:37:38.857932   44959 command_runner.go:130] > }
	I1105 18:37:38.858401   44959 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:37:38.858422   44959 cache_images.go:84] Images are preloaded, skipping loading
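The two 'sudo crictl images --output json' listings above are what the preload check parses: every expected v1.31.2 component already has a matching repoTags entry, so extraction and image loading are skipped. A quick manual spot-check along the same lines (jq is an assumption here and is not necessarily installed in the Buildroot guest):

	# list the tags CRI-O already has and filter for the kube components (sketch)
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep '^registry.k8s.io/kube-'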
	I1105 18:37:38.858433   44959 kubeadm.go:934] updating node { 192.168.39.235 8443 v1.31.2 crio true true} ...
	I1105 18:37:38.858567   44959 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-501442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
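The kubelet unit override above clears ExecStart and relaunches the v1.31.2 kubelet with this machine's name and IP pinned via --hostname-override and --node-ip. On the guest it lands as an ordinary systemd drop-in, so the effective flags can be confirmed with systemctl; a sketch, assuming the same SSH key and user shown earlier in this log:

	# confirm the kubelet override minikube generated on the control-plane node (sketch)
	ssh -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa \
	    docker@192.168.39.235 "systemctl cat kubelet | grep -E 'hostname-override|node-ip'"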
	I1105 18:37:38.858649   44959 ssh_runner.go:195] Run: crio config
	I1105 18:37:38.890961   44959 command_runner.go:130] ! time="2024-11-05 18:37:38.866378564Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1105 18:37:38.898167   44959 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1105 18:37:38.908853   44959 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1105 18:37:38.908878   44959 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1105 18:37:38.908887   44959 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1105 18:37:38.908892   44959 command_runner.go:130] > #
	I1105 18:37:38.908901   44959 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1105 18:37:38.908910   44959 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1105 18:37:38.908920   44959 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1105 18:37:38.908931   44959 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1105 18:37:38.908937   44959 command_runner.go:130] > # reload'.
	I1105 18:37:38.908948   44959 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1105 18:37:38.908961   44959 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1105 18:37:38.908972   44959 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1105 18:37:38.908985   44959 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1105 18:37:38.908992   44959 command_runner.go:130] > [crio]
	I1105 18:37:38.909003   44959 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1105 18:37:38.909014   44959 command_runner.go:130] > # containers images, in this directory.
	I1105 18:37:38.909022   44959 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1105 18:37:38.909046   44959 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1105 18:37:38.909062   44959 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1105 18:37:38.909074   44959 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1105 18:37:38.909082   44959 command_runner.go:130] > # imagestore = ""
	I1105 18:37:38.909093   44959 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1105 18:37:38.909106   44959 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1105 18:37:38.909115   44959 command_runner.go:130] > storage_driver = "overlay"
	I1105 18:37:38.909126   44959 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1105 18:37:38.909140   44959 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1105 18:37:38.909150   44959 command_runner.go:130] > storage_option = [
	I1105 18:37:38.909181   44959 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1105 18:37:38.909194   44959 command_runner.go:130] > ]
	I1105 18:37:38.909215   44959 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1105 18:37:38.909229   44959 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1105 18:37:38.909240   44959 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1105 18:37:38.909251   44959 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1105 18:37:38.909264   44959 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1105 18:37:38.909275   44959 command_runner.go:130] > # always happen on a node reboot
	I1105 18:37:38.909285   44959 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1105 18:37:38.909302   44959 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1105 18:37:38.909317   44959 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1105 18:37:38.909327   44959 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1105 18:37:38.909339   44959 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1105 18:37:38.909355   44959 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1105 18:37:38.909371   44959 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1105 18:37:38.909382   44959 command_runner.go:130] > # internal_wipe = true
	I1105 18:37:38.909398   44959 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1105 18:37:38.909410   44959 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1105 18:37:38.909420   44959 command_runner.go:130] > # internal_repair = false
	I1105 18:37:38.909432   44959 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1105 18:37:38.909446   44959 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1105 18:37:38.909458   44959 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1105 18:37:38.909470   44959 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1105 18:37:38.909483   44959 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1105 18:37:38.909491   44959 command_runner.go:130] > [crio.api]
	I1105 18:37:38.909500   44959 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1105 18:37:38.909511   44959 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1105 18:37:38.909521   44959 command_runner.go:130] > # IP address on which the stream server will listen.
	I1105 18:37:38.909532   44959 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1105 18:37:38.909544   44959 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1105 18:37:38.909556   44959 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1105 18:37:38.909565   44959 command_runner.go:130] > # stream_port = "0"
	I1105 18:37:38.909576   44959 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1105 18:37:38.909587   44959 command_runner.go:130] > # stream_enable_tls = false
	I1105 18:37:38.909600   44959 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1105 18:37:38.909613   44959 command_runner.go:130] > # stream_idle_timeout = ""
	I1105 18:37:38.909630   44959 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1105 18:37:38.909643   44959 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1105 18:37:38.909652   44959 command_runner.go:130] > # minutes.
	I1105 18:37:38.909659   44959 command_runner.go:130] > # stream_tls_cert = ""
	I1105 18:37:38.909676   44959 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1105 18:37:38.909688   44959 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1105 18:37:38.909696   44959 command_runner.go:130] > # stream_tls_key = ""
	I1105 18:37:38.909716   44959 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1105 18:37:38.909729   44959 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1105 18:37:38.909758   44959 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1105 18:37:38.909767   44959 command_runner.go:130] > # stream_tls_ca = ""
	I1105 18:37:38.909780   44959 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1105 18:37:38.909790   44959 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1105 18:37:38.909805   44959 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1105 18:37:38.909816   44959 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1105 18:37:38.909829   44959 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1105 18:37:38.909842   44959 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1105 18:37:38.909850   44959 command_runner.go:130] > [crio.runtime]
	I1105 18:37:38.909860   44959 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1105 18:37:38.909873   44959 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1105 18:37:38.909883   44959 command_runner.go:130] > # "nofile=1024:2048"
	I1105 18:37:38.909895   44959 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1105 18:37:38.909904   44959 command_runner.go:130] > # default_ulimits = [
	I1105 18:37:38.909911   44959 command_runner.go:130] > # ]
	I1105 18:37:38.909923   44959 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1105 18:37:38.909932   44959 command_runner.go:130] > # no_pivot = false
	I1105 18:37:38.909943   44959 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1105 18:37:38.909956   44959 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1105 18:37:38.909967   44959 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1105 18:37:38.909978   44959 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1105 18:37:38.909989   44959 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1105 18:37:38.910001   44959 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1105 18:37:38.910019   44959 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1105 18:37:38.910030   44959 command_runner.go:130] > # Cgroup setting for conmon
	I1105 18:37:38.910044   44959 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1105 18:37:38.910054   44959 command_runner.go:130] > conmon_cgroup = "pod"
	I1105 18:37:38.910067   44959 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1105 18:37:38.910078   44959 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1105 18:37:38.910099   44959 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1105 18:37:38.910108   44959 command_runner.go:130] > conmon_env = [
	I1105 18:37:38.910119   44959 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1105 18:37:38.910126   44959 command_runner.go:130] > ]
	I1105 18:37:38.910136   44959 command_runner.go:130] > # Additional environment variables to set for all the
	I1105 18:37:38.910148   44959 command_runner.go:130] > # containers. These are overridden if set in the
	I1105 18:37:38.910161   44959 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1105 18:37:38.910170   44959 command_runner.go:130] > # default_env = [
	I1105 18:37:38.910176   44959 command_runner.go:130] > # ]
	I1105 18:37:38.910189   44959 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1105 18:37:38.910204   44959 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1105 18:37:38.910215   44959 command_runner.go:130] > # selinux = false
	I1105 18:37:38.910229   44959 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1105 18:37:38.910253   44959 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1105 18:37:38.910266   44959 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1105 18:37:38.910276   44959 command_runner.go:130] > # seccomp_profile = ""
	I1105 18:37:38.910296   44959 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1105 18:37:38.910309   44959 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1105 18:37:38.910320   44959 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1105 18:37:38.910330   44959 command_runner.go:130] > # which might increase security.
	I1105 18:37:38.910339   44959 command_runner.go:130] > # This option is currently deprecated,
	I1105 18:37:38.910352   44959 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1105 18:37:38.910363   44959 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1105 18:37:38.910375   44959 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1105 18:37:38.910388   44959 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1105 18:37:38.910401   44959 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1105 18:37:38.910412   44959 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1105 18:37:38.910430   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.910440   44959 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1105 18:37:38.910453   44959 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1105 18:37:38.910464   44959 command_runner.go:130] > # the cgroup blockio controller.
	I1105 18:37:38.910472   44959 command_runner.go:130] > # blockio_config_file = ""
	I1105 18:37:38.910484   44959 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1105 18:37:38.910493   44959 command_runner.go:130] > # blockio parameters.
	I1105 18:37:38.910502   44959 command_runner.go:130] > # blockio_reload = false
	I1105 18:37:38.910516   44959 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1105 18:37:38.910525   44959 command_runner.go:130] > # irqbalance daemon.
	I1105 18:37:38.910537   44959 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1105 18:37:38.910553   44959 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1105 18:37:38.910567   44959 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1105 18:37:38.910582   44959 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1105 18:37:38.910595   44959 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1105 18:37:38.910610   44959 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1105 18:37:38.910622   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.910633   44959 command_runner.go:130] > # rdt_config_file = ""
	I1105 18:37:38.910645   44959 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1105 18:37:38.910654   44959 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1105 18:37:38.910695   44959 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1105 18:37:38.910710   44959 command_runner.go:130] > # separate_pull_cgroup = ""
	I1105 18:37:38.910724   44959 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1105 18:37:38.910736   44959 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1105 18:37:38.910745   44959 command_runner.go:130] > # will be added.
	I1105 18:37:38.910753   44959 command_runner.go:130] > # default_capabilities = [
	I1105 18:37:38.910762   44959 command_runner.go:130] > # 	"CHOWN",
	I1105 18:37:38.910770   44959 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1105 18:37:38.910778   44959 command_runner.go:130] > # 	"FSETID",
	I1105 18:37:38.910785   44959 command_runner.go:130] > # 	"FOWNER",
	I1105 18:37:38.910791   44959 command_runner.go:130] > # 	"SETGID",
	I1105 18:37:38.910799   44959 command_runner.go:130] > # 	"SETUID",
	I1105 18:37:38.910808   44959 command_runner.go:130] > # 	"SETPCAP",
	I1105 18:37:38.910822   44959 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1105 18:37:38.910831   44959 command_runner.go:130] > # 	"KILL",
	I1105 18:37:38.910837   44959 command_runner.go:130] > # ]
	I1105 18:37:38.910850   44959 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1105 18:37:38.910863   44959 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1105 18:37:38.910874   44959 command_runner.go:130] > # add_inheritable_capabilities = false
	I1105 18:37:38.910886   44959 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1105 18:37:38.910899   44959 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1105 18:37:38.910909   44959 command_runner.go:130] > default_sysctls = [
	I1105 18:37:38.910919   44959 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1105 18:37:38.910927   44959 command_runner.go:130] > ]
	I1105 18:37:38.910935   44959 command_runner.go:130] > # List of devices on the host that a
	I1105 18:37:38.910949   44959 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1105 18:37:38.910957   44959 command_runner.go:130] > # allowed_devices = [
	I1105 18:37:38.910964   44959 command_runner.go:130] > # 	"/dev/fuse",
	I1105 18:37:38.910986   44959 command_runner.go:130] > # ]
	I1105 18:37:38.910997   44959 command_runner.go:130] > # List of additional devices. specified as
	I1105 18:37:38.911012   44959 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1105 18:37:38.911023   44959 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1105 18:37:38.911038   44959 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1105 18:37:38.911047   44959 command_runner.go:130] > # additional_devices = [
	I1105 18:37:38.911053   44959 command_runner.go:130] > # ]
	I1105 18:37:38.911063   44959 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1105 18:37:38.911076   44959 command_runner.go:130] > # cdi_spec_dirs = [
	I1105 18:37:38.911084   44959 command_runner.go:130] > # 	"/etc/cdi",
	I1105 18:37:38.911092   44959 command_runner.go:130] > # 	"/var/run/cdi",
	I1105 18:37:38.911099   44959 command_runner.go:130] > # ]
	I1105 18:37:38.911111   44959 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1105 18:37:38.911124   44959 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1105 18:37:38.911133   44959 command_runner.go:130] > # Defaults to false.
	I1105 18:37:38.911142   44959 command_runner.go:130] > # device_ownership_from_security_context = false
	I1105 18:37:38.911153   44959 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1105 18:37:38.911166   44959 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1105 18:37:38.911184   44959 command_runner.go:130] > # hooks_dir = [
	I1105 18:37:38.911195   44959 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1105 18:37:38.911201   44959 command_runner.go:130] > # ]
	I1105 18:37:38.911213   44959 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1105 18:37:38.911226   44959 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1105 18:37:38.911235   44959 command_runner.go:130] > # its default mounts from the following two files:
	I1105 18:37:38.911243   44959 command_runner.go:130] > #
	I1105 18:37:38.911254   44959 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1105 18:37:38.911267   44959 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1105 18:37:38.911279   44959 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1105 18:37:38.911287   44959 command_runner.go:130] > #
	I1105 18:37:38.911298   44959 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1105 18:37:38.911311   44959 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1105 18:37:38.911324   44959 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1105 18:37:38.911335   44959 command_runner.go:130] > #      only add mounts it finds in this file.
	I1105 18:37:38.911343   44959 command_runner.go:130] > #
	I1105 18:37:38.911351   44959 command_runner.go:130] > # default_mounts_file = ""
	I1105 18:37:38.911363   44959 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1105 18:37:38.911377   44959 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1105 18:37:38.911387   44959 command_runner.go:130] > pids_limit = 1024
	I1105 18:37:38.911400   44959 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1105 18:37:38.911413   44959 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1105 18:37:38.911427   44959 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1105 18:37:38.911442   44959 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1105 18:37:38.911452   44959 command_runner.go:130] > # log_size_max = -1
	I1105 18:37:38.911467   44959 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1105 18:37:38.911476   44959 command_runner.go:130] > # log_to_journald = false
	I1105 18:37:38.911487   44959 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1105 18:37:38.911498   44959 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1105 18:37:38.911510   44959 command_runner.go:130] > # Path to directory for container attach sockets.
	I1105 18:37:38.911521   44959 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1105 18:37:38.911532   44959 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1105 18:37:38.911542   44959 command_runner.go:130] > # bind_mount_prefix = ""
	I1105 18:37:38.911562   44959 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1105 18:37:38.911572   44959 command_runner.go:130] > # read_only = false
	I1105 18:37:38.911582   44959 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1105 18:37:38.911595   44959 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1105 18:37:38.911605   44959 command_runner.go:130] > # live configuration reload.
	I1105 18:37:38.911615   44959 command_runner.go:130] > # log_level = "info"
	I1105 18:37:38.911625   44959 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1105 18:37:38.911636   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.911643   44959 command_runner.go:130] > # log_filter = ""
	I1105 18:37:38.911656   44959 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1105 18:37:38.911671   44959 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1105 18:37:38.911681   44959 command_runner.go:130] > # separated by comma.
	I1105 18:37:38.911696   44959 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1105 18:37:38.911711   44959 command_runner.go:130] > # uid_mappings = ""
	I1105 18:37:38.911724   44959 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1105 18:37:38.911741   44959 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1105 18:37:38.911751   44959 command_runner.go:130] > # separated by comma.
	I1105 18:37:38.911766   44959 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1105 18:37:38.911776   44959 command_runner.go:130] > # gid_mappings = ""
	I1105 18:37:38.911790   44959 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1105 18:37:38.911802   44959 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1105 18:37:38.911816   44959 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1105 18:37:38.911831   44959 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1105 18:37:38.911841   44959 command_runner.go:130] > # minimum_mappable_uid = -1
	I1105 18:37:38.911853   44959 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1105 18:37:38.911865   44959 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1105 18:37:38.911878   44959 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1105 18:37:38.911895   44959 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1105 18:37:38.911910   44959 command_runner.go:130] > # minimum_mappable_gid = -1
	I1105 18:37:38.911924   44959 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1105 18:37:38.911937   44959 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1105 18:37:38.911950   44959 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1105 18:37:38.911959   44959 command_runner.go:130] > # ctr_stop_timeout = 30
	I1105 18:37:38.911975   44959 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1105 18:37:38.911987   44959 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1105 18:37:38.911997   44959 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1105 18:37:38.912008   44959 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1105 18:37:38.912017   44959 command_runner.go:130] > drop_infra_ctr = false
	I1105 18:37:38.912029   44959 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1105 18:37:38.912042   44959 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1105 18:37:38.912057   44959 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1105 18:37:38.912066   44959 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1105 18:37:38.912080   44959 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1105 18:37:38.912093   44959 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1105 18:37:38.912105   44959 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1105 18:37:38.912117   44959 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1105 18:37:38.912126   44959 command_runner.go:130] > # shared_cpuset = ""
	I1105 18:37:38.912137   44959 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1105 18:37:38.912148   44959 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1105 18:37:38.912156   44959 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1105 18:37:38.912170   44959 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1105 18:37:38.912180   44959 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1105 18:37:38.912193   44959 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1105 18:37:38.912207   44959 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1105 18:37:38.912217   44959 command_runner.go:130] > # enable_criu_support = false
	I1105 18:37:38.912228   44959 command_runner.go:130] > # Enable/disable the generation of the container,
	I1105 18:37:38.912241   44959 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1105 18:37:38.912252   44959 command_runner.go:130] > # enable_pod_events = false
	I1105 18:37:38.912265   44959 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1105 18:37:38.912279   44959 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1105 18:37:38.912290   44959 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1105 18:37:38.912300   44959 command_runner.go:130] > # default_runtime = "runc"
	I1105 18:37:38.912310   44959 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1105 18:37:38.912323   44959 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1105 18:37:38.912342   44959 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1105 18:37:38.912356   44959 command_runner.go:130] > # creation as a file is not desired either.
	I1105 18:37:38.912378   44959 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1105 18:37:38.912390   44959 command_runner.go:130] > # the hostname is being managed dynamically.
	I1105 18:37:38.912399   44959 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1105 18:37:38.912407   44959 command_runner.go:130] > # ]
	I1105 18:37:38.912419   44959 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1105 18:37:38.912432   44959 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1105 18:37:38.912445   44959 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1105 18:37:38.912457   44959 command_runner.go:130] > # Each entry in the table should follow the format:
	I1105 18:37:38.912464   44959 command_runner.go:130] > #
	I1105 18:37:38.912473   44959 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1105 18:37:38.912483   44959 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1105 18:37:38.912998   44959 command_runner.go:130] > # runtime_type = "oci"
	I1105 18:37:38.913025   44959 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1105 18:37:38.913035   44959 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1105 18:37:38.913042   44959 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1105 18:37:38.913056   44959 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1105 18:37:38.913062   44959 command_runner.go:130] > # monitor_env = []
	I1105 18:37:38.913069   44959 command_runner.go:130] > # privileged_without_host_devices = false
	I1105 18:37:38.913075   44959 command_runner.go:130] > # allowed_annotations = []
	I1105 18:37:38.913088   44959 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1105 18:37:38.913098   44959 command_runner.go:130] > # Where:
	I1105 18:37:38.913107   44959 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1105 18:37:38.913117   44959 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1105 18:37:38.913132   44959 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1105 18:37:38.913141   44959 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1105 18:37:38.913147   44959 command_runner.go:130] > #   in $PATH.
	I1105 18:37:38.913162   44959 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1105 18:37:38.913170   44959 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1105 18:37:38.913179   44959 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1105 18:37:38.913185   44959 command_runner.go:130] > #   state.
	I1105 18:37:38.913200   44959 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1105 18:37:38.913209   44959 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1105 18:37:38.913223   44959 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1105 18:37:38.913238   44959 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1105 18:37:38.913247   44959 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1105 18:37:38.913262   44959 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1105 18:37:38.913273   44959 command_runner.go:130] > #   The currently recognized values are:
	I1105 18:37:38.913282   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1105 18:37:38.913299   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1105 18:37:38.913308   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1105 18:37:38.913322   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1105 18:37:38.913334   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1105 18:37:38.913343   44959 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1105 18:37:38.913358   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1105 18:37:38.913368   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1105 18:37:38.913383   44959 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1105 18:37:38.913392   44959 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1105 18:37:38.913401   44959 command_runner.go:130] > #   deprecated option "conmon".
	I1105 18:37:38.913418   44959 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1105 18:37:38.913426   44959 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1105 18:37:38.913436   44959 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1105 18:37:38.913448   44959 command_runner.go:130] > #   should be moved to the container's cgroup
	I1105 18:37:38.913458   44959 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1105 18:37:38.913467   44959 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1105 18:37:38.913482   44959 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1105 18:37:38.913490   44959 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1105 18:37:38.913496   44959 command_runner.go:130] > #
	I1105 18:37:38.913503   44959 command_runner.go:130] > # Using the seccomp notifier feature:
	I1105 18:37:38.913513   44959 command_runner.go:130] > #
	I1105 18:37:38.913522   44959 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1105 18:37:38.913532   44959 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1105 18:37:38.913537   44959 command_runner.go:130] > #
	I1105 18:37:38.913551   44959 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1105 18:37:38.913561   44959 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1105 18:37:38.913565   44959 command_runner.go:130] > #
	I1105 18:37:38.913579   44959 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1105 18:37:38.913591   44959 command_runner.go:130] > # feature.
	I1105 18:37:38.913595   44959 command_runner.go:130] > #
	I1105 18:37:38.913604   44959 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1105 18:37:38.913618   44959 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1105 18:37:38.913627   44959 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1105 18:37:38.913642   44959 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1105 18:37:38.913656   44959 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1105 18:37:38.913667   44959 command_runner.go:130] > #
	I1105 18:37:38.913677   44959 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1105 18:37:38.913691   44959 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1105 18:37:38.913701   44959 command_runner.go:130] > #
	I1105 18:37:38.913719   44959 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1105 18:37:38.913734   44959 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1105 18:37:38.913738   44959 command_runner.go:130] > #
	I1105 18:37:38.913748   44959 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1105 18:37:38.913757   44959 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1105 18:37:38.913762   44959 command_runner.go:130] > # limitation.
	I1105 18:37:38.913777   44959 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1105 18:37:38.913784   44959 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1105 18:37:38.913792   44959 command_runner.go:130] > runtime_type = "oci"
	I1105 18:37:38.913821   44959 command_runner.go:130] > runtime_root = "/run/runc"
	I1105 18:37:38.913857   44959 command_runner.go:130] > runtime_config_path = ""
	I1105 18:37:38.913870   44959 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1105 18:37:38.913877   44959 command_runner.go:130] > monitor_cgroup = "pod"
	I1105 18:37:38.913884   44959 command_runner.go:130] > monitor_exec_cgroup = ""
	I1105 18:37:38.913896   44959 command_runner.go:130] > monitor_env = [
	I1105 18:37:38.913908   44959 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1105 18:37:38.913912   44959 command_runner.go:130] > ]
	I1105 18:37:38.913919   44959 command_runner.go:130] > privileged_without_host_devices = false
	I1105 18:37:38.913937   44959 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1105 18:37:38.913950   44959 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1105 18:37:38.913966   44959 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1105 18:37:38.914009   44959 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1105 18:37:38.914017   44959 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1105 18:37:38.914026   44959 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1105 18:37:38.914037   44959 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1105 18:37:38.914073   44959 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1105 18:37:38.914080   44959 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1105 18:37:38.914358   44959 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1105 18:37:38.914378   44959 command_runner.go:130] > # Example:
	I1105 18:37:38.914387   44959 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1105 18:37:38.914395   44959 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1105 18:37:38.914410   44959 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1105 18:37:38.914421   44959 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1105 18:37:38.914429   44959 command_runner.go:130] > # cpuset = 0
	I1105 18:37:38.914537   44959 command_runner.go:130] > # cpushares = "0-1"
	I1105 18:37:38.914559   44959 command_runner.go:130] > # Where:
	I1105 18:37:38.914567   44959 command_runner.go:130] > # The workload name is workload-type.
	I1105 18:37:38.914578   44959 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1105 18:37:38.914590   44959 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1105 18:37:38.914603   44959 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1105 18:37:38.914619   44959 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1105 18:37:38.914633   44959 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1105 18:37:38.914644   44959 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1105 18:37:38.914658   44959 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1105 18:37:38.914669   44959 command_runner.go:130] > # Default value is set to true
	I1105 18:37:38.914677   44959 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1105 18:37:38.914688   44959 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1105 18:37:38.914696   44959 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1105 18:37:38.914701   44959 command_runner.go:130] > # Default value is set to 'false'
	I1105 18:37:38.914707   44959 command_runner.go:130] > # disable_hostport_mapping = false
	I1105 18:37:38.914714   44959 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1105 18:37:38.914719   44959 command_runner.go:130] > #
	I1105 18:37:38.914725   44959 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1105 18:37:38.914733   44959 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1105 18:37:38.914742   44959 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1105 18:37:38.914750   44959 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1105 18:37:38.914758   44959 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1105 18:37:38.914762   44959 command_runner.go:130] > [crio.image]
	I1105 18:37:38.914771   44959 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1105 18:37:38.914778   44959 command_runner.go:130] > # default_transport = "docker://"
	I1105 18:37:38.914784   44959 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1105 18:37:38.914793   44959 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1105 18:37:38.914799   44959 command_runner.go:130] > # global_auth_file = ""
	I1105 18:37:38.914805   44959 command_runner.go:130] > # The image used to instantiate infra containers.
	I1105 18:37:38.914812   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.914816   44959 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1105 18:37:38.914825   44959 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1105 18:37:38.914833   44959 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1105 18:37:38.914839   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.914852   44959 command_runner.go:130] > # pause_image_auth_file = ""
	I1105 18:37:38.914860   44959 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1105 18:37:38.914866   44959 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1105 18:37:38.914874   44959 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1105 18:37:38.914882   44959 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1105 18:37:38.914886   44959 command_runner.go:130] > # pause_command = "/pause"
	I1105 18:37:38.914902   44959 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1105 18:37:38.914910   44959 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1105 18:37:38.914918   44959 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1105 18:37:38.914931   44959 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1105 18:37:38.914939   44959 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1105 18:37:38.914951   44959 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1105 18:37:38.914960   44959 command_runner.go:130] > # pinned_images = [
	I1105 18:37:38.914964   44959 command_runner.go:130] > # ]
	I1105 18:37:38.914988   44959 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1105 18:37:38.915002   44959 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1105 18:37:38.915015   44959 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1105 18:37:38.915029   44959 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1105 18:37:38.915039   44959 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1105 18:37:38.915045   44959 command_runner.go:130] > # signature_policy = ""
	I1105 18:37:38.915054   44959 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1105 18:37:38.915067   44959 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1105 18:37:38.915077   44959 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1105 18:37:38.915083   44959 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1105 18:37:38.915099   44959 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1105 18:37:38.915106   44959 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1105 18:37:38.915112   44959 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1105 18:37:38.915121   44959 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1105 18:37:38.915127   44959 command_runner.go:130] > # changing them here.
	I1105 18:37:38.915131   44959 command_runner.go:130] > # insecure_registries = [
	I1105 18:37:38.915136   44959 command_runner.go:130] > # ]
	I1105 18:37:38.915142   44959 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1105 18:37:38.915150   44959 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1105 18:37:38.915157   44959 command_runner.go:130] > # image_volumes = "mkdir"
	I1105 18:37:38.915161   44959 command_runner.go:130] > # Temporary directory to use for storing big files
	I1105 18:37:38.915168   44959 command_runner.go:130] > # big_files_temporary_dir = ""
	I1105 18:37:38.915176   44959 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1105 18:37:38.915182   44959 command_runner.go:130] > # CNI plugins.
	I1105 18:37:38.915186   44959 command_runner.go:130] > [crio.network]
	I1105 18:37:38.915194   44959 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1105 18:37:38.915202   44959 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1105 18:37:38.915206   44959 command_runner.go:130] > # cni_default_network = ""
	I1105 18:37:38.915214   44959 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1105 18:37:38.915220   44959 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1105 18:37:38.915225   44959 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1105 18:37:38.915231   44959 command_runner.go:130] > # plugin_dirs = [
	I1105 18:37:38.915235   44959 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1105 18:37:38.915241   44959 command_runner.go:130] > # ]
	I1105 18:37:38.915246   44959 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1105 18:37:38.915252   44959 command_runner.go:130] > [crio.metrics]
	I1105 18:37:38.915257   44959 command_runner.go:130] > # Globally enable or disable metrics support.
	I1105 18:37:38.915260   44959 command_runner.go:130] > enable_metrics = true
	I1105 18:37:38.915266   44959 command_runner.go:130] > # Specify enabled metrics collectors.
	I1105 18:37:38.915273   44959 command_runner.go:130] > # Per default all metrics are enabled.
	I1105 18:37:38.915279   44959 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1105 18:37:38.915288   44959 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1105 18:37:38.915296   44959 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1105 18:37:38.915302   44959 command_runner.go:130] > # metrics_collectors = [
	I1105 18:37:38.915306   44959 command_runner.go:130] > # 	"operations",
	I1105 18:37:38.915312   44959 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1105 18:37:38.915317   44959 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1105 18:37:38.915321   44959 command_runner.go:130] > # 	"operations_errors",
	I1105 18:37:38.915326   44959 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1105 18:37:38.915330   44959 command_runner.go:130] > # 	"image_pulls_by_name",
	I1105 18:37:38.915336   44959 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1105 18:37:38.915340   44959 command_runner.go:130] > # 	"image_pulls_failures",
	I1105 18:37:38.915344   44959 command_runner.go:130] > # 	"image_pulls_successes",
	I1105 18:37:38.915351   44959 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1105 18:37:38.915355   44959 command_runner.go:130] > # 	"image_layer_reuse",
	I1105 18:37:38.915362   44959 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1105 18:37:38.915366   44959 command_runner.go:130] > # 	"containers_oom_total",
	I1105 18:37:38.915372   44959 command_runner.go:130] > # 	"containers_oom",
	I1105 18:37:38.915376   44959 command_runner.go:130] > # 	"processes_defunct",
	I1105 18:37:38.915382   44959 command_runner.go:130] > # 	"operations_total",
	I1105 18:37:38.915386   44959 command_runner.go:130] > # 	"operations_latency_seconds",
	I1105 18:37:38.915393   44959 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1105 18:37:38.915397   44959 command_runner.go:130] > # 	"operations_errors_total",
	I1105 18:37:38.915403   44959 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1105 18:37:38.915408   44959 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1105 18:37:38.915414   44959 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1105 18:37:38.915418   44959 command_runner.go:130] > # 	"image_pulls_success_total",
	I1105 18:37:38.915430   44959 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1105 18:37:38.915437   44959 command_runner.go:130] > # 	"containers_oom_count_total",
	I1105 18:37:38.915441   44959 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1105 18:37:38.915448   44959 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1105 18:37:38.915451   44959 command_runner.go:130] > # ]
	I1105 18:37:38.915460   44959 command_runner.go:130] > # The port on which the metrics server will listen.
	I1105 18:37:38.915466   44959 command_runner.go:130] > # metrics_port = 9090
	I1105 18:37:38.915471   44959 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1105 18:37:38.915477   44959 command_runner.go:130] > # metrics_socket = ""
	I1105 18:37:38.915482   44959 command_runner.go:130] > # The certificate for the secure metrics server.
	I1105 18:37:38.915490   44959 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1105 18:37:38.915499   44959 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1105 18:37:38.915504   44959 command_runner.go:130] > # certificate on any modification event.
	I1105 18:37:38.915510   44959 command_runner.go:130] > # metrics_cert = ""
	I1105 18:37:38.915515   44959 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1105 18:37:38.915522   44959 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1105 18:37:38.915526   44959 command_runner.go:130] > # metrics_key = ""
	I1105 18:37:38.915534   44959 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1105 18:37:38.915541   44959 command_runner.go:130] > [crio.tracing]
	I1105 18:37:38.915546   44959 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1105 18:37:38.915552   44959 command_runner.go:130] > # enable_tracing = false
	I1105 18:37:38.915558   44959 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1105 18:37:38.915565   44959 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1105 18:37:38.915573   44959 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1105 18:37:38.915584   44959 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1105 18:37:38.915591   44959 command_runner.go:130] > # CRI-O NRI configuration.
	I1105 18:37:38.915595   44959 command_runner.go:130] > [crio.nri]
	I1105 18:37:38.915599   44959 command_runner.go:130] > # Globally enable or disable NRI.
	I1105 18:37:38.915605   44959 command_runner.go:130] > # enable_nri = false
	I1105 18:37:38.915610   44959 command_runner.go:130] > # NRI socket to listen on.
	I1105 18:37:38.915616   44959 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1105 18:37:38.915620   44959 command_runner.go:130] > # NRI plugin directory to use.
	I1105 18:37:38.915625   44959 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1105 18:37:38.915630   44959 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1105 18:37:38.915637   44959 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1105 18:37:38.915642   44959 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1105 18:37:38.915651   44959 command_runner.go:130] > # nri_disable_connections = false
	I1105 18:37:38.915659   44959 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1105 18:37:38.915664   44959 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1105 18:37:38.915671   44959 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1105 18:37:38.915676   44959 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1105 18:37:38.915691   44959 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1105 18:37:38.915701   44959 command_runner.go:130] > [crio.stats]
	I1105 18:37:38.915709   44959 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1105 18:37:38.915715   44959 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1105 18:37:38.915721   44959 command_runner.go:130] > # stats_collection_period = 0
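The CRI-O configuration dumped above leaves the metrics server options at their commented-out defaults (metrics_port = 9090, with the listed metrics_collectors). As an illustrative sketch only, not part of the test run, a small Go program could scrape that endpoint once metrics are enabled; the address and the metric name prefixes filtered below are assumptions based on those defaults.

	package main
	
	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)
	
	func main() {
		// Assumes CRI-O was started with enable_metrics = true and the default
		// metrics_port = 9090 shown in the config dump above.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
	
		// Print a subset of the collectors named in the config; the exact metric
		// name prefixes here are assumptions, not taken from the log.
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "container_runtime_crio_image_pulls") ||
				strings.HasPrefix(line, "container_runtime_crio_containers_oom") {
				fmt.Println(line)
			}
		}
	}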
	I1105 18:37:38.915791   44959 cni.go:84] Creating CNI manager for ""
	I1105 18:37:38.915804   44959 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 18:37:38.915814   44959 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:37:38.915836   44959 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-501442 NodeName:multinode-501442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:37:38.915953   44959 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-501442"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.235"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
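The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). As a hedged sketch of how one might read a single field back out of such a file, the snippet below splits the stream and decodes each document with gopkg.in/yaml.v3; the local file path is hypothetical (the log writes the rendered config to the VM, not to this machine).

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		// Hypothetical local copy of the rendered config shown above.
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
	
		// The file is a multi-document YAML stream separated by "---".
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				continue
			}
			if m["kind"] == "KubeletConfiguration" {
				// Expected to print "cgroupfs" per the dump above.
				fmt.Println("cgroupDriver:", m["cgroupDriver"])
			}
		}
	}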
	
	I1105 18:37:38.916010   44959 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:37:38.926318   44959 command_runner.go:130] > kubeadm
	I1105 18:37:38.926341   44959 command_runner.go:130] > kubectl
	I1105 18:37:38.926347   44959 command_runner.go:130] > kubelet
	I1105 18:37:38.926389   44959 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:37:38.926447   44959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 18:37:38.936004   44959 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1105 18:37:38.951622   44959 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:37:38.967674   44959 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1105 18:37:38.982931   44959 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I1105 18:37:38.986588   44959 command_runner.go:130] > 192.168.39.235	control-plane.minikube.internal
	I1105 18:37:38.986667   44959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:37:39.128149   44959 ssh_runner.go:195] Run: sudo systemctl start kubelet
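Before restarting the kubelet, the log above greps /etc/hosts for the control-plane alias and finds "192.168.39.235	control-plane.minikube.internal" already present. A minimal Go sketch of that idempotent check-then-append step is shown below; the IP and hostname are taken from the log, while the function itself is illustrative and assumes write access to the file.

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostEntry mirrors the grep check above: append the control-plane
	// alias to /etc/hosts only if no matching line exists yet.
	func ensureHostEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[0] == ip && fields[1] == host {
				return nil // already present, nothing to do
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
		return err
	}
	
	func main() {
		if err := ensureHostEntry("/etc/hosts", "192.168.39.235", "control-plane.minikube.internal"); err != nil {
			fmt.Println("hosts update failed:", err)
		}
	}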
	I1105 18:37:39.142448   44959 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442 for IP: 192.168.39.235
	I1105 18:37:39.142471   44959 certs.go:194] generating shared ca certs ...
	I1105 18:37:39.142485   44959 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:37:39.142621   44959 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:37:39.142658   44959 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:37:39.142671   44959 certs.go:256] generating profile certs ...
	I1105 18:37:39.142782   44959 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/client.key
	I1105 18:37:39.142842   44959 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.key.eff842b3
	I1105 18:37:39.142883   44959 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.key
	I1105 18:37:39.142894   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:37:39.142909   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:37:39.142922   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:37:39.142932   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:37:39.142944   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:37:39.142956   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:37:39.142985   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:37:39.143008   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:37:39.143078   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:37:39.143111   44959 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:37:39.143120   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:37:39.143140   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:37:39.143165   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:37:39.143186   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:37:39.143224   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:37:39.143248   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.143263   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.143275   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.143906   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:37:39.167141   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:37:39.189604   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:37:39.212310   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:37:39.234362   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 18:37:39.256019   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:37:39.277446   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:37:39.299055   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:37:39.321762   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:37:39.343625   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:37:39.364936   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:37:39.387370   44959 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:37:39.403453   44959 ssh_runner.go:195] Run: openssl version
	I1105 18:37:39.410196   44959 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1105 18:37:39.410301   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:37:39.421206   44959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.425419   44959 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.425482   44959 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.425533   44959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.430647   44959 command_runner.go:130] > b5213941
	I1105 18:37:39.430897   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:37:39.440438   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:37:39.450922   44959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.455078   44959 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.455105   44959 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.455150   44959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.460371   44959 command_runner.go:130] > 51391683
	I1105 18:37:39.460435   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:37:39.469482   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:37:39.480026   44959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.484154   44959 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.484280   44959 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.484335   44959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.489792   44959 command_runner.go:130] > 3ec20f2e
	I1105 18:37:39.489849   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
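The three openssl/ln pairs above install each CA certificate under /etc/ssl/certs/<subject-hash>.0 (b5213941, 51391683, 3ec20f2e) so OpenSSL's hashed-directory lookup can find them. Below is a hedged Go sketch of the same idea, shelling out to the openssl binary the log already confirmed is installed; the paths are illustrative and the program would need root to write into /etc/ssl/certs.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash mirrors the "openssl x509 -hash ... && ln -fs" steps above:
	// ask openssl for the subject hash of certPath and symlink it as <hash>.0
	// inside certsDir.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // emulate ln -f: replace an existing link if present
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("link failed:", err)
		}
	}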
	I1105 18:37:39.498719   44959 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:37:39.502725   44959 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:37:39.502761   44959 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1105 18:37:39.502771   44959 command_runner.go:130] > Device: 253,1	Inode: 5244462     Links: 1
	I1105 18:37:39.502783   44959 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1105 18:37:39.502789   44959 command_runner.go:130] > Access: 2024-11-05 18:30:59.480150353 +0000
	I1105 18:37:39.502796   44959 command_runner.go:130] > Modify: 2024-11-05 18:30:59.480150353 +0000
	I1105 18:37:39.502801   44959 command_runner.go:130] > Change: 2024-11-05 18:30:59.480150353 +0000
	I1105 18:37:39.502806   44959 command_runner.go:130] >  Birth: 2024-11-05 18:30:59.480150353 +0000
	I1105 18:37:39.502846   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:37:39.508231   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.508297   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:37:39.513472   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.513538   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:37:39.518848   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.518899   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:37:39.523761   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.523881   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:37:39.528893   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.529044   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 18:37:39.533964   44959 command_runner.go:130] > Certificate will not expire
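Each "openssl x509 -noout -checkend 86400" call above asks whether a certificate will still be valid 24 hours from now. A rough Go equivalent using crypto/x509 is sketched below; it is illustrative only, and the certificate path is copied from the log even though the file actually lives on the minikube VM.

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		// Same check as `openssl x509 -noout -checkend 86400`: does the cert
		// expire within the next 24 hours?
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}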
	I1105 18:37:39.534163   44959 kubeadm.go:392] StartCluster: {Name:multinode-501442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-501442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:37:39.534281   44959 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:37:39.534325   44959 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:37:39.569543   44959 command_runner.go:130] > ff2c842c433a37cd2e6ebecf01dccc56471a33a1b32dd128ede3a626dad85eae
	I1105 18:37:39.569573   44959 command_runner.go:130] > bda4c5ff9760f31549d67318c9231b3c270f281ab22d59acb512f7f543dd9f6e
	I1105 18:37:39.569583   44959 command_runner.go:130] > 8436bf7ad36acfe8556093d25a9b978f7f5ecf4f1f6cf4f595b10a00156c17df
	I1105 18:37:39.569595   44959 command_runner.go:130] > 12d7011690bfd50d49711ecadafa040173ac51c10ed10a77c3b01174eece06d0
	I1105 18:37:39.569605   44959 command_runner.go:130] > 5640c6ad72f610faa2987de91e3c26eb08f329dbeff15858c90987541499001a
	I1105 18:37:39.569614   44959 command_runner.go:130] > bcf0c4abf9bd5d335fcecc197fab96b31e98221619aa5a323415a55a38229f7c
	I1105 18:37:39.569622   44959 command_runner.go:130] > a633ece5a868ea38a983b5f7f9f64208bfe44221954702c308b47c4c6edff92f
	I1105 18:37:39.569637   44959 command_runner.go:130] > 7ee0a777d11270b8edce25900ac6246070ebe29c0ef97881366503b66f874f55
	I1105 18:37:39.569664   44959 cri.go:89] found id: "ff2c842c433a37cd2e6ebecf01dccc56471a33a1b32dd128ede3a626dad85eae"
	I1105 18:37:39.569676   44959 cri.go:89] found id: "bda4c5ff9760f31549d67318c9231b3c270f281ab22d59acb512f7f543dd9f6e"
	I1105 18:37:39.569681   44959 cri.go:89] found id: "8436bf7ad36acfe8556093d25a9b978f7f5ecf4f1f6cf4f595b10a00156c17df"
	I1105 18:37:39.569687   44959 cri.go:89] found id: "12d7011690bfd50d49711ecadafa040173ac51c10ed10a77c3b01174eece06d0"
	I1105 18:37:39.569691   44959 cri.go:89] found id: "5640c6ad72f610faa2987de91e3c26eb08f329dbeff15858c90987541499001a"
	I1105 18:37:39.569696   44959 cri.go:89] found id: "bcf0c4abf9bd5d335fcecc197fab96b31e98221619aa5a323415a55a38229f7c"
	I1105 18:37:39.569703   44959 cri.go:89] found id: "a633ece5a868ea38a983b5f7f9f64208bfe44221954702c308b47c4c6edff92f"
	I1105 18:37:39.569707   44959 cri.go:89] found id: "7ee0a777d11270b8edce25900ac6246070ebe29c0ef97881366503b66f874f55"
	I1105 18:37:39.569711   44959 cri.go:89] found id: ""
	I1105 18:37:39.569764   44959 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
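The StartCluster step at the end of the captured log enumerates kube-system containers by running crictl with a namespace label filter and collecting the returned IDs. A hedged sketch of that collection step in Go is shown below, using the same crictl invocation that appears in the log; it assumes crictl is on PATH and that the caller has the necessary privileges.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same invocation as in the log above: list all kube-system container IDs,
		// one per line, via the CRI CLI.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}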
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-501442 -n multinode-501442
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-501442 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.86s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (145.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 stop
E1105 18:40:34.489257   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-501442 stop: exit status 82 (2m0.469571239s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-501442-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-501442 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-501442 status: (18.724719721s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr: (3.355819138s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr": 
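The stop attempt above returned exit status 82 (GUEST_STOP_TIMEOUT), and the follow-up status checks then found hosts and kubelets still running. As a hedged sketch of how a caller might run the same command and surface that exit code from Go, the snippet below uses os/exec; the binary path mirrors the one in the test output and is an assumption about the local build layout.

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Runs the same command the test ran; the binary path assumes a local
		// minikube build at out/minikube-linux-amd64.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-501442", "stop")
		output, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 82 corresponds to the GUEST_STOP_TIMEOUT failure seen above.
			fmt.Printf("stop failed with exit status %d\n%s\n", exitErr.ExitCode(), output)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Println("stop succeeded")
	}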
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-501442 -n multinode-501442
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-501442 logs -n 25: (2.01322204s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m02:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442:/home/docker/cp-test_multinode-501442-m02_multinode-501442.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442 sudo cat                                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m02_multinode-501442.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m02:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03:/home/docker/cp-test_multinode-501442-m02_multinode-501442-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442-m03 sudo cat                                   | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m02_multinode-501442-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp testdata/cp-test.txt                                                | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3513316962/001/cp-test_multinode-501442-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442:/home/docker/cp-test_multinode-501442-m03_multinode-501442.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442 sudo cat                                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m03_multinode-501442.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt                       | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02:/home/docker/cp-test_multinode-501442-m03_multinode-501442-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442-m02 sudo cat                                   | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m03_multinode-501442-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-501442 node stop m03                                                          | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	| node    | multinode-501442 node start                                                             | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:34 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-501442                                                                | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:34 UTC |                     |
	| stop    | -p multinode-501442                                                                     | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:34 UTC |                     |
	| start   | -p multinode-501442                                                                     | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:36 UTC | 05 Nov 24 18:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-501442                                                                | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:39 UTC |                     |
	| node    | multinode-501442 node delete                                                            | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:39 UTC | 05 Nov 24 18:39 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-501442 stop                                                                   | multinode-501442 | jenkins | v1.34.0 | 05 Nov 24 18:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:36:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:36:02.962285   44959 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:36:02.962422   44959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:36:02.962431   44959 out.go:358] Setting ErrFile to fd 2...
	I1105 18:36:02.962435   44959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:36:02.962630   44959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:36:02.963250   44959 out.go:352] Setting JSON to false
	I1105 18:36:02.964143   44959 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4705,"bootTime":1730827058,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:36:02.964240   44959 start.go:139] virtualization: kvm guest
	I1105 18:36:02.966468   44959 out.go:177] * [multinode-501442] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:36:02.967768   44959 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:36:02.967793   44959 notify.go:220] Checking for updates...
	I1105 18:36:02.970165   44959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:36:02.971529   44959 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:36:02.972806   44959 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:36:02.974014   44959 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:36:02.975356   44959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:36:02.977032   44959 config.go:182] Loaded profile config "multinode-501442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:36:02.977150   44959 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:36:02.977620   44959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:36:02.977670   44959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:36:02.993248   44959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33587
	I1105 18:36:02.993835   44959 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:36:02.994481   44959 main.go:141] libmachine: Using API Version  1
	I1105 18:36:02.994503   44959 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:36:02.994899   44959 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:36:02.995125   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:36:03.032468   44959 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:36:03.033828   44959 start.go:297] selected driver: kvm2
	I1105 18:36:03.033844   44959 start.go:901] validating driver "kvm2" against &{Name:multinode-501442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:36:03.033998   44959 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:36:03.034326   44959 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:36:03.034411   44959 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:36:03.050322   44959 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:36:03.051286   44959 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:36:03.051330   44959 cni.go:84] Creating CNI manager for ""
	I1105 18:36:03.051394   44959 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 18:36:03.051467   44959 start.go:340] cluster config:
	{Name:multinode-501442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:36:03.051646   44959 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:36:03.053774   44959 out.go:177] * Starting "multinode-501442" primary control-plane node in "multinode-501442" cluster
	I1105 18:36:03.055042   44959 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:36:03.055083   44959 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:36:03.055090   44959 cache.go:56] Caching tarball of preloaded images
	I1105 18:36:03.055178   44959 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:36:03.055192   44959 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:36:03.055367   44959 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/config.json ...
	I1105 18:36:03.055614   44959 start.go:360] acquireMachinesLock for multinode-501442: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:36:03.055658   44959 start.go:364] duration metric: took 23.718µs to acquireMachinesLock for "multinode-501442"
	I1105 18:36:03.055673   44959 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:36:03.055681   44959 fix.go:54] fixHost starting: 
	I1105 18:36:03.056036   44959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:36:03.056072   44959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:36:03.070656   44959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1105 18:36:03.071125   44959 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:36:03.071686   44959 main.go:141] libmachine: Using API Version  1
	I1105 18:36:03.071711   44959 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:36:03.072033   44959 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:36:03.072235   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:36:03.072397   44959 main.go:141] libmachine: (multinode-501442) Calling .GetState
	I1105 18:36:03.073915   44959 fix.go:112] recreateIfNeeded on multinode-501442: state=Running err=<nil>
	W1105 18:36:03.073937   44959 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:36:03.075985   44959 out.go:177] * Updating the running kvm2 "multinode-501442" VM ...
	I1105 18:36:03.077407   44959 machine.go:93] provisionDockerMachine start ...
	I1105 18:36:03.077432   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:36:03.077642   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.080113   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.080561   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.080595   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.080765   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.080930   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.081081   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.081192   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.081366   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:36:03.081568   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:36:03.081579   44959 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:36:03.195936   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-501442
	
	I1105 18:36:03.195976   44959 main.go:141] libmachine: (multinode-501442) Calling .GetMachineName
	I1105 18:36:03.196262   44959 buildroot.go:166] provisioning hostname "multinode-501442"
	I1105 18:36:03.196294   44959 main.go:141] libmachine: (multinode-501442) Calling .GetMachineName
	I1105 18:36:03.196544   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.199085   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.199492   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.199521   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.199695   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.199866   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.200045   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.200179   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.200362   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:36:03.200518   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:36:03.200528   44959 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-501442 && echo "multinode-501442" | sudo tee /etc/hostname
	I1105 18:36:03.330594   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-501442
	
	I1105 18:36:03.330627   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.333516   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.333915   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.333945   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.334087   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.334284   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.334496   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.334735   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.334932   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:36:03.335156   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:36:03.335174   44959 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-501442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-501442/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-501442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:36:03.452265   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:36:03.452300   44959 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:36:03.452351   44959 buildroot.go:174] setting up certificates
	I1105 18:36:03.452367   44959 provision.go:84] configureAuth start
	I1105 18:36:03.452380   44959 main.go:141] libmachine: (multinode-501442) Calling .GetMachineName
	I1105 18:36:03.452695   44959 main.go:141] libmachine: (multinode-501442) Calling .GetIP
	I1105 18:36:03.455502   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.455875   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.455906   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.456061   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.458330   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.458599   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.458633   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.458795   44959 provision.go:143] copyHostCerts
	I1105 18:36:03.458828   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:36:03.458870   44959 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:36:03.458886   44959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:36:03.458990   44959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:36:03.459110   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:36:03.459136   44959 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:36:03.459144   44959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:36:03.459187   44959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:36:03.459267   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:36:03.459300   44959 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:36:03.459310   44959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:36:03.459345   44959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:36:03.459432   44959 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.multinode-501442 san=[127.0.0.1 192.168.39.235 localhost minikube multinode-501442]
	I1105 18:36:03.627180   44959 provision.go:177] copyRemoteCerts
	I1105 18:36:03.627245   44959 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:36:03.627274   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.630165   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.630528   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.630562   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.630712   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.630932   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.631164   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.631291   44959 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:36:03.718061   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:36:03.718143   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:36:03.742436   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:36:03.742499   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1105 18:36:03.765895   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:36:03.765971   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:36:03.789139   44959 provision.go:87] duration metric: took 336.758403ms to configureAuth
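	The scp calls above finish configureAuth by landing the CA, server certificate, and server key where the provisioner expects them on the guest. A minimal verification sketch, assuming the SSH key path, username, and guest IP reported in this run (the ssh options and the ls invocation are illustrative only and were not executed by the test):
	# Hypothetical spot-check of the copied certificates on the guest
	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa \
	  docker@192.168.39.235 \
	  'ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'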
	I1105 18:36:03.789167   44959 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:36:03.789375   44959 config.go:182] Loaded profile config "multinode-501442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:36:03.789445   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:36:03.792249   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.792547   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:36:03.792575   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:36:03.792764   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:36:03.792965   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.793162   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:36:03.793297   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:36:03.793433   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:36:03.793664   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:36:03.793685   44959 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:37:34.424465   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:37:34.424490   44959 machine.go:96] duration metric: took 1m31.347066615s to provisionDockerMachine
	I1105 18:37:34.424509   44959 start.go:293] postStartSetup for "multinode-501442" (driver="kvm2")
	I1105 18:37:34.424523   44959 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:37:34.424547   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.424857   44959 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:37:34.424905   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:37:34.428050   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.428503   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.428530   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.428785   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:37:34.428971   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.429120   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:37:34.429265   44959 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:37:34.518656   44959 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:37:34.522499   44959 command_runner.go:130] > NAME=Buildroot
	I1105 18:37:34.522522   44959 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1105 18:37:34.522529   44959 command_runner.go:130] > ID=buildroot
	I1105 18:37:34.522545   44959 command_runner.go:130] > VERSION_ID=2023.02.9
	I1105 18:37:34.522552   44959 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1105 18:37:34.522636   44959 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:37:34.522670   44959 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:37:34.522749   44959 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:37:34.522844   44959 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:37:34.522856   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /etc/ssl/certs/154922.pem
	I1105 18:37:34.522987   44959 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:37:34.532241   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:37:34.555002   44959 start.go:296] duration metric: took 130.47732ms for postStartSetup
	I1105 18:37:34.555058   44959 fix.go:56] duration metric: took 1m31.499375969s for fixHost
	I1105 18:37:34.555082   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:37:34.557816   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.558161   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.558184   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.558388   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:37:34.558582   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.558759   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.558892   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:37:34.559126   44959 main.go:141] libmachine: Using SSH client type: native
	I1105 18:37:34.559318   44959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:37:34.559335   44959 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:37:34.671504   44959 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730831854.647121236
	
	I1105 18:37:34.671533   44959 fix.go:216] guest clock: 1730831854.647121236
	I1105 18:37:34.671540   44959 fix.go:229] Guest: 2024-11-05 18:37:34.647121236 +0000 UTC Remote: 2024-11-05 18:37:34.555064873 +0000 UTC m=+91.633953874 (delta=92.056363ms)
	I1105 18:37:34.671563   44959 fix.go:200] guest clock delta is within tolerance: 92.056363ms
	I1105 18:37:34.671570   44959 start.go:83] releasing machines lock for "multinode-501442", held for 1m31.615905036s
	I1105 18:37:34.671593   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.671864   44959 main.go:141] libmachine: (multinode-501442) Calling .GetIP
	I1105 18:37:34.675007   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.675534   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.675553   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.675770   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.676353   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.676532   44959 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:37:34.676645   44959 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:37:34.676711   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:37:34.676765   44959 ssh_runner.go:195] Run: cat /version.json
	I1105 18:37:34.676792   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:37:34.679346   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.679598   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.679752   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.679789   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.679917   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:37:34.680054   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:34.680069   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.680076   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:34.680217   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:37:34.680239   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:37:34.680346   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:37:34.680344   44959 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:37:34.680455   44959 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:37:34.680559   44959 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:37:34.759511   44959 command_runner.go:130] > {"iso_version": "v1.34.0-1730282777-19883", "kicbase_version": "v0.0.45-1730110049-19872", "minikube_version": "v1.34.0", "commit": "7738213fbe7cb3f4867f3e3b534798700ea0e3fb"}
	I1105 18:37:34.787765   44959 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1105 18:37:34.788471   44959 ssh_runner.go:195] Run: systemctl --version
	I1105 18:37:34.794865   44959 command_runner.go:130] > systemd 252 (252)
	I1105 18:37:34.794904   44959 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1105 18:37:34.794978   44959 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:37:34.953687   44959 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 18:37:34.959327   44959 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1105 18:37:34.959382   44959 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:37:34.959428   44959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:37:34.968202   44959 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:37:34.968224   44959 start.go:495] detecting cgroup driver to use...
	I1105 18:37:34.968354   44959 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:37:34.983761   44959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:37:34.997177   44959 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:37:34.997245   44959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:37:35.010296   44959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:37:35.023735   44959 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:37:35.174162   44959 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:37:35.307417   44959 docker.go:233] disabling docker service ...
	I1105 18:37:35.307493   44959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:37:35.324046   44959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:37:35.337088   44959 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:37:35.481162   44959 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:37:35.625311   44959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:37:35.640563   44959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:37:35.657964   44959 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1105 18:37:35.658394   44959 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:37:35.658451   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.668360   44959 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:37:35.668440   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.678403   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.688139   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.697889   44959 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:37:35.707971   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.717816   44959 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.727750   44959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:37:35.737841   44959 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:37:35.747080   44959 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1105 18:37:35.747177   44959 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:37:35.756398   44959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:37:35.897124   44959 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:37:38.672205   44959 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.775043085s)
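	Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs cgroup manager, place conmon in the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before crio is restarted. A hedged way to confirm the drop-in took effect after a restart like this one (an illustrative check, not part of the captured run; it assumes the same file path used above):
	# Show the settings the run just rewrote, then confirm crio came back up
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio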
	I1105 18:37:38.672236   44959 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:37:38.672278   44959 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:37:38.677963   44959 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1105 18:37:38.677994   44959 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1105 18:37:38.678004   44959 command_runner.go:130] > Device: 0,22	Inode: 1301        Links: 1
	I1105 18:37:38.678012   44959 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1105 18:37:38.678017   44959 command_runner.go:130] > Access: 2024-11-05 18:37:38.582013562 +0000
	I1105 18:37:38.678023   44959 command_runner.go:130] > Modify: 2024-11-05 18:37:38.548012703 +0000
	I1105 18:37:38.678027   44959 command_runner.go:130] > Change: 2024-11-05 18:37:38.548012703 +0000
	I1105 18:37:38.678031   44959 command_runner.go:130] >  Birth: -
	I1105 18:37:38.678130   44959 start.go:563] Will wait 60s for crictl version
	I1105 18:37:38.678201   44959 ssh_runner.go:195] Run: which crictl
	I1105 18:37:38.681786   44959 command_runner.go:130] > /usr/bin/crictl
	I1105 18:37:38.681935   44959 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:37:38.715851   44959 command_runner.go:130] > Version:  0.1.0
	I1105 18:37:38.715878   44959 command_runner.go:130] > RuntimeName:  cri-o
	I1105 18:37:38.715885   44959 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1105 18:37:38.715892   44959 command_runner.go:130] > RuntimeApiVersion:  v1
	I1105 18:37:38.715912   44959 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:37:38.715988   44959 ssh_runner.go:195] Run: crio --version
	I1105 18:37:38.741581   44959 command_runner.go:130] > crio version 1.29.1
	I1105 18:37:38.741602   44959 command_runner.go:130] > Version:        1.29.1
	I1105 18:37:38.741611   44959 command_runner.go:130] > GitCommit:      unknown
	I1105 18:37:38.741618   44959 command_runner.go:130] > GitCommitDate:  unknown
	I1105 18:37:38.741624   44959 command_runner.go:130] > GitTreeState:   clean
	I1105 18:37:38.741634   44959 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1105 18:37:38.741640   44959 command_runner.go:130] > GoVersion:      go1.21.6
	I1105 18:37:38.741644   44959 command_runner.go:130] > Compiler:       gc
	I1105 18:37:38.741648   44959 command_runner.go:130] > Platform:       linux/amd64
	I1105 18:37:38.741652   44959 command_runner.go:130] > Linkmode:       dynamic
	I1105 18:37:38.741656   44959 command_runner.go:130] > BuildTags:      
	I1105 18:37:38.741660   44959 command_runner.go:130] >   containers_image_ostree_stub
	I1105 18:37:38.741665   44959 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1105 18:37:38.741668   44959 command_runner.go:130] >   btrfs_noversion
	I1105 18:37:38.741673   44959 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1105 18:37:38.741680   44959 command_runner.go:130] >   libdm_no_deferred_remove
	I1105 18:37:38.741684   44959 command_runner.go:130] >   seccomp
	I1105 18:37:38.741688   44959 command_runner.go:130] > LDFlags:          unknown
	I1105 18:37:38.741695   44959 command_runner.go:130] > SeccompEnabled:   true
	I1105 18:37:38.741714   44959 command_runner.go:130] > AppArmorEnabled:  false
	I1105 18:37:38.742880   44959 ssh_runner.go:195] Run: crio --version
	I1105 18:37:38.769501   44959 command_runner.go:130] > crio version 1.29.1
	I1105 18:37:38.769532   44959 command_runner.go:130] > Version:        1.29.1
	I1105 18:37:38.769541   44959 command_runner.go:130] > GitCommit:      unknown
	I1105 18:37:38.769547   44959 command_runner.go:130] > GitCommitDate:  unknown
	I1105 18:37:38.769553   44959 command_runner.go:130] > GitTreeState:   clean
	I1105 18:37:38.769561   44959 command_runner.go:130] > BuildDate:      2024-10-30T14:24:06Z
	I1105 18:37:38.769566   44959 command_runner.go:130] > GoVersion:      go1.21.6
	I1105 18:37:38.769570   44959 command_runner.go:130] > Compiler:       gc
	I1105 18:37:38.769574   44959 command_runner.go:130] > Platform:       linux/amd64
	I1105 18:37:38.769578   44959 command_runner.go:130] > Linkmode:       dynamic
	I1105 18:37:38.769589   44959 command_runner.go:130] > BuildTags:      
	I1105 18:37:38.769596   44959 command_runner.go:130] >   containers_image_ostree_stub
	I1105 18:37:38.769605   44959 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1105 18:37:38.769611   44959 command_runner.go:130] >   btrfs_noversion
	I1105 18:37:38.769620   44959 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1105 18:37:38.769627   44959 command_runner.go:130] >   libdm_no_deferred_remove
	I1105 18:37:38.769635   44959 command_runner.go:130] >   seccomp
	I1105 18:37:38.769640   44959 command_runner.go:130] > LDFlags:          unknown
	I1105 18:37:38.769644   44959 command_runner.go:130] > SeccompEnabled:   true
	I1105 18:37:38.769648   44959 command_runner.go:130] > AppArmorEnabled:  false
	I1105 18:37:38.772399   44959 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:37:38.773588   44959 main.go:141] libmachine: (multinode-501442) Calling .GetIP
	I1105 18:37:38.775860   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:38.776187   44959 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:37:38.776210   44959 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:37:38.776418   44959 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:37:38.780452   44959 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1105 18:37:38.780549   44959 kubeadm.go:883] updating cluster {Name:multinode-501442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:37:38.780672   44959 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:37:38.780711   44959 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:37:38.825221   44959 command_runner.go:130] > {
	I1105 18:37:38.825248   44959 command_runner.go:130] >   "images": [
	I1105 18:37:38.825252   44959 command_runner.go:130] >     {
	I1105 18:37:38.825260   44959 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1105 18:37:38.825264   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825270   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1105 18:37:38.825274   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825277   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825285   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1105 18:37:38.825291   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1105 18:37:38.825294   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825300   44959 command_runner.go:130] >       "size": "94965812",
	I1105 18:37:38.825307   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825322   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.825338   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825344   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825350   44959 command_runner.go:130] >     },
	I1105 18:37:38.825354   44959 command_runner.go:130] >     {
	I1105 18:37:38.825362   44959 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1105 18:37:38.825368   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825373   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1105 18:37:38.825379   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825383   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825398   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1105 18:37:38.825414   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1105 18:37:38.825423   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825432   44959 command_runner.go:130] >       "size": "94958644",
	I1105 18:37:38.825442   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825456   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.825464   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825468   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825476   44959 command_runner.go:130] >     },
	I1105 18:37:38.825485   44959 command_runner.go:130] >     {
	I1105 18:37:38.825497   44959 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1105 18:37:38.825506   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825514   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1105 18:37:38.825523   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825529   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825543   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1105 18:37:38.825553   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1105 18:37:38.825559   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825568   44959 command_runner.go:130] >       "size": "1363676",
	I1105 18:37:38.825578   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825588   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.825597   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825607   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825621   44959 command_runner.go:130] >     },
	I1105 18:37:38.825629   44959 command_runner.go:130] >     {
	I1105 18:37:38.825639   44959 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1105 18:37:38.825648   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825658   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1105 18:37:38.825667   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825674   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825689   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1105 18:37:38.825712   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1105 18:37:38.825720   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825725   44959 command_runner.go:130] >       "size": "31470524",
	I1105 18:37:38.825731   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825738   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.825747   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825757   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825765   44959 command_runner.go:130] >     },
	I1105 18:37:38.825773   44959 command_runner.go:130] >     {
	I1105 18:37:38.825787   44959 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1105 18:37:38.825796   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825805   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1105 18:37:38.825811   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825818   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825833   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1105 18:37:38.825848   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1105 18:37:38.825857   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825872   44959 command_runner.go:130] >       "size": "63273227",
	I1105 18:37:38.825881   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.825889   44959 command_runner.go:130] >       "username": "nonroot",
	I1105 18:37:38.825893   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.825897   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.825905   44959 command_runner.go:130] >     },
	I1105 18:37:38.825913   44959 command_runner.go:130] >     {
	I1105 18:37:38.825926   44959 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1105 18:37:38.825942   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.825953   44959 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1105 18:37:38.825962   44959 command_runner.go:130] >       ],
	I1105 18:37:38.825968   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.825976   44959 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1105 18:37:38.825988   44959 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1105 18:37:38.825997   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826004   44959 command_runner.go:130] >       "size": "149009664",
	I1105 18:37:38.826013   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826019   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.826028   44959 command_runner.go:130] >       },
	I1105 18:37:38.826036   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826044   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826053   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826060   44959 command_runner.go:130] >     },
	I1105 18:37:38.826064   44959 command_runner.go:130] >     {
	I1105 18:37:38.826071   44959 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1105 18:37:38.826080   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826089   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1105 18:37:38.826098   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826104   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826118   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1105 18:37:38.826132   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1105 18:37:38.826140   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826147   44959 command_runner.go:130] >       "size": "95274464",
	I1105 18:37:38.826156   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826163   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.826172   44959 command_runner.go:130] >       },
	I1105 18:37:38.826264   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826294   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826302   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826307   44959 command_runner.go:130] >     },
	I1105 18:37:38.826312   44959 command_runner.go:130] >     {
	I1105 18:37:38.826345   44959 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1105 18:37:38.826355   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826364   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1105 18:37:38.826372   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826377   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826400   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1105 18:37:38.826410   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1105 18:37:38.826416   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826420   44959 command_runner.go:130] >       "size": "89474374",
	I1105 18:37:38.826426   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826430   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.826443   44959 command_runner.go:130] >       },
	I1105 18:37:38.826447   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826451   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826455   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826458   44959 command_runner.go:130] >     },
	I1105 18:37:38.826461   44959 command_runner.go:130] >     {
	I1105 18:37:38.826466   44959 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1105 18:37:38.826470   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826475   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1105 18:37:38.826478   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826486   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826494   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1105 18:37:38.826503   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1105 18:37:38.826506   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826510   44959 command_runner.go:130] >       "size": "92783513",
	I1105 18:37:38.826517   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.826520   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826524   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826528   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826531   44959 command_runner.go:130] >     },
	I1105 18:37:38.826534   44959 command_runner.go:130] >     {
	I1105 18:37:38.826540   44959 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1105 18:37:38.826551   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826558   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1105 18:37:38.826562   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826566   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826577   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1105 18:37:38.826584   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1105 18:37:38.826590   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826595   44959 command_runner.go:130] >       "size": "68457798",
	I1105 18:37:38.826601   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826604   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.826608   44959 command_runner.go:130] >       },
	I1105 18:37:38.826612   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826616   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826619   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.826623   44959 command_runner.go:130] >     },
	I1105 18:37:38.826626   44959 command_runner.go:130] >     {
	I1105 18:37:38.826655   44959 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1105 18:37:38.826661   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.826666   44959 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1105 18:37:38.826672   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826676   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.826685   44959 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1105 18:37:38.826694   44959 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1105 18:37:38.826705   44959 command_runner.go:130] >       ],
	I1105 18:37:38.826711   44959 command_runner.go:130] >       "size": "742080",
	I1105 18:37:38.826715   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.826721   44959 command_runner.go:130] >         "value": "65535"
	I1105 18:37:38.826725   44959 command_runner.go:130] >       },
	I1105 18:37:38.826731   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.826735   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.826740   44959 command_runner.go:130] >       "pinned": true
	I1105 18:37:38.826744   44959 command_runner.go:130] >     }
	I1105 18:37:38.826748   44959 command_runner.go:130] >   ]
	I1105 18:37:38.826756   44959 command_runner.go:130] > }
	I1105 18:37:38.826953   44959 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:37:38.826964   44959 crio.go:433] Images already preloaded, skipping extraction
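	The JSON listing above is what minikube inspects to conclude that all images required for Kubernetes v1.31.2 are already present, so preload extraction is skipped. The same inventory is easier to scan as a table; a usage sketch, assuming crictl is on the guest PATH as reported earlier in this log (not run by the test):
	# Illustrative manual equivalent of the preload check
	sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'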
	I1105 18:37:38.827039   44959 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:37:38.856940   44959 command_runner.go:130] > {
	I1105 18:37:38.856962   44959 command_runner.go:130] >   "images": [
	I1105 18:37:38.856966   44959 command_runner.go:130] >     {
	I1105 18:37:38.856974   44959 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1105 18:37:38.856984   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.856990   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1105 18:37:38.856993   44959 command_runner.go:130] >       ],
	I1105 18:37:38.856997   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857005   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1105 18:37:38.857012   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1105 18:37:38.857015   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857020   44959 command_runner.go:130] >       "size": "94965812",
	I1105 18:37:38.857023   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857027   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857031   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857035   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857038   44959 command_runner.go:130] >     },
	I1105 18:37:38.857041   44959 command_runner.go:130] >     {
	I1105 18:37:38.857047   44959 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1105 18:37:38.857050   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857056   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1105 18:37:38.857062   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857066   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857073   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1105 18:37:38.857079   44959 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1105 18:37:38.857086   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857090   44959 command_runner.go:130] >       "size": "94958644",
	I1105 18:37:38.857093   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857101   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857106   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857111   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857115   44959 command_runner.go:130] >     },
	I1105 18:37:38.857119   44959 command_runner.go:130] >     {
	I1105 18:37:38.857124   44959 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1105 18:37:38.857130   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857136   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1105 18:37:38.857140   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857148   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857157   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1105 18:37:38.857164   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1105 18:37:38.857170   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857174   44959 command_runner.go:130] >       "size": "1363676",
	I1105 18:37:38.857179   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857184   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857191   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857195   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857199   44959 command_runner.go:130] >     },
	I1105 18:37:38.857202   44959 command_runner.go:130] >     {
	I1105 18:37:38.857208   44959 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1105 18:37:38.857214   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857220   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1105 18:37:38.857225   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857229   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857236   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1105 18:37:38.857251   44959 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1105 18:37:38.857257   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857261   44959 command_runner.go:130] >       "size": "31470524",
	I1105 18:37:38.857268   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857272   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857275   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857279   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857283   44959 command_runner.go:130] >     },
	I1105 18:37:38.857286   44959 command_runner.go:130] >     {
	I1105 18:37:38.857292   44959 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1105 18:37:38.857297   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857301   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1105 18:37:38.857308   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857311   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857320   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1105 18:37:38.857333   44959 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1105 18:37:38.857342   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857349   44959 command_runner.go:130] >       "size": "63273227",
	I1105 18:37:38.857353   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857359   44959 command_runner.go:130] >       "username": "nonroot",
	I1105 18:37:38.857363   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857369   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857372   44959 command_runner.go:130] >     },
	I1105 18:37:38.857376   44959 command_runner.go:130] >     {
	I1105 18:37:38.857382   44959 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1105 18:37:38.857388   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857393   44959 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1105 18:37:38.857399   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857402   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857409   44959 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1105 18:37:38.857418   44959 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1105 18:37:38.857422   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857426   44959 command_runner.go:130] >       "size": "149009664",
	I1105 18:37:38.857433   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857436   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.857442   44959 command_runner.go:130] >       },
	I1105 18:37:38.857448   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857451   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857455   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857459   44959 command_runner.go:130] >     },
	I1105 18:37:38.857461   44959 command_runner.go:130] >     {
	I1105 18:37:38.857467   44959 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1105 18:37:38.857473   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857478   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1105 18:37:38.857484   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857488   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857495   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1105 18:37:38.857509   44959 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1105 18:37:38.857514   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857523   44959 command_runner.go:130] >       "size": "95274464",
	I1105 18:37:38.857529   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857533   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.857536   44959 command_runner.go:130] >       },
	I1105 18:37:38.857545   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857551   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857555   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857558   44959 command_runner.go:130] >     },
	I1105 18:37:38.857568   44959 command_runner.go:130] >     {
	I1105 18:37:38.857577   44959 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1105 18:37:38.857581   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857588   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1105 18:37:38.857591   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857595   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857613   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1105 18:37:38.857624   44959 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1105 18:37:38.857627   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857654   44959 command_runner.go:130] >       "size": "89474374",
	I1105 18:37:38.857661   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857665   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.857668   44959 command_runner.go:130] >       },
	I1105 18:37:38.857672   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857675   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857679   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857682   44959 command_runner.go:130] >     },
	I1105 18:37:38.857685   44959 command_runner.go:130] >     {
	I1105 18:37:38.857691   44959 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1105 18:37:38.857697   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857702   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1105 18:37:38.857707   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857711   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857718   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1105 18:37:38.857729   44959 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1105 18:37:38.857737   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857744   44959 command_runner.go:130] >       "size": "92783513",
	I1105 18:37:38.857747   44959 command_runner.go:130] >       "uid": null,
	I1105 18:37:38.857751   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857754   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857758   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857761   44959 command_runner.go:130] >     },
	I1105 18:37:38.857765   44959 command_runner.go:130] >     {
	I1105 18:37:38.857773   44959 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1105 18:37:38.857777   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857784   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1105 18:37:38.857787   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857791   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857798   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1105 18:37:38.857807   44959 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1105 18:37:38.857810   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857814   44959 command_runner.go:130] >       "size": "68457798",
	I1105 18:37:38.857818   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857821   44959 command_runner.go:130] >         "value": "0"
	I1105 18:37:38.857825   44959 command_runner.go:130] >       },
	I1105 18:37:38.857829   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857832   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857836   44959 command_runner.go:130] >       "pinned": false
	I1105 18:37:38.857839   44959 command_runner.go:130] >     },
	I1105 18:37:38.857843   44959 command_runner.go:130] >     {
	I1105 18:37:38.857849   44959 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1105 18:37:38.857853   44959 command_runner.go:130] >       "repoTags": [
	I1105 18:37:38.857858   44959 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1105 18:37:38.857863   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857867   44959 command_runner.go:130] >       "repoDigests": [
	I1105 18:37:38.857873   44959 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1105 18:37:38.857882   44959 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1105 18:37:38.857886   44959 command_runner.go:130] >       ],
	I1105 18:37:38.857898   44959 command_runner.go:130] >       "size": "742080",
	I1105 18:37:38.857905   44959 command_runner.go:130] >       "uid": {
	I1105 18:37:38.857909   44959 command_runner.go:130] >         "value": "65535"
	I1105 18:37:38.857912   44959 command_runner.go:130] >       },
	I1105 18:37:38.857915   44959 command_runner.go:130] >       "username": "",
	I1105 18:37:38.857919   44959 command_runner.go:130] >       "spec": null,
	I1105 18:37:38.857923   44959 command_runner.go:130] >       "pinned": true
	I1105 18:37:38.857926   44959 command_runner.go:130] >     }
	I1105 18:37:38.857929   44959 command_runner.go:130] >   ]
	I1105 18:37:38.857932   44959 command_runner.go:130] > }
	I1105 18:37:38.858401   44959 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:37:38.858422   44959 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:37:38.858433   44959 kubeadm.go:934] updating node { 192.168.39.235 8443 v1.31.2 crio true true} ...
	I1105 18:37:38.858567   44959 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-501442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:37:38.858649   44959 ssh_runner.go:195] Run: crio config
	I1105 18:37:38.890961   44959 command_runner.go:130] ! time="2024-11-05 18:37:38.866378564Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1105 18:37:38.898167   44959 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1105 18:37:38.908853   44959 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1105 18:37:38.908878   44959 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1105 18:37:38.908887   44959 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1105 18:37:38.908892   44959 command_runner.go:130] > #
	I1105 18:37:38.908901   44959 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1105 18:37:38.908910   44959 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1105 18:37:38.908920   44959 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1105 18:37:38.908931   44959 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1105 18:37:38.908937   44959 command_runner.go:130] > # reload'.
	I1105 18:37:38.908948   44959 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1105 18:37:38.908961   44959 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1105 18:37:38.908972   44959 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1105 18:37:38.908985   44959 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1105 18:37:38.908992   44959 command_runner.go:130] > [crio]
	I1105 18:37:38.909003   44959 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1105 18:37:38.909014   44959 command_runner.go:130] > # containers images, in this directory.
	I1105 18:37:38.909022   44959 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1105 18:37:38.909046   44959 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1105 18:37:38.909062   44959 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1105 18:37:38.909074   44959 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1105 18:37:38.909082   44959 command_runner.go:130] > # imagestore = ""
	I1105 18:37:38.909093   44959 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1105 18:37:38.909106   44959 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1105 18:37:38.909115   44959 command_runner.go:130] > storage_driver = "overlay"
	I1105 18:37:38.909126   44959 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1105 18:37:38.909140   44959 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1105 18:37:38.909150   44959 command_runner.go:130] > storage_option = [
	I1105 18:37:38.909181   44959 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1105 18:37:38.909194   44959 command_runner.go:130] > ]
	I1105 18:37:38.909215   44959 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1105 18:37:38.909229   44959 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1105 18:37:38.909240   44959 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1105 18:37:38.909251   44959 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1105 18:37:38.909264   44959 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1105 18:37:38.909275   44959 command_runner.go:130] > # always happen on a node reboot
	I1105 18:37:38.909285   44959 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1105 18:37:38.909302   44959 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1105 18:37:38.909317   44959 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1105 18:37:38.909327   44959 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1105 18:37:38.909339   44959 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1105 18:37:38.909355   44959 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1105 18:37:38.909371   44959 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1105 18:37:38.909382   44959 command_runner.go:130] > # internal_wipe = true
	I1105 18:37:38.909398   44959 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1105 18:37:38.909410   44959 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1105 18:37:38.909420   44959 command_runner.go:130] > # internal_repair = false
	I1105 18:37:38.909432   44959 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1105 18:37:38.909446   44959 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1105 18:37:38.909458   44959 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1105 18:37:38.909470   44959 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1105 18:37:38.909483   44959 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1105 18:37:38.909491   44959 command_runner.go:130] > [crio.api]
	I1105 18:37:38.909500   44959 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1105 18:37:38.909511   44959 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1105 18:37:38.909521   44959 command_runner.go:130] > # IP address on which the stream server will listen.
	I1105 18:37:38.909532   44959 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1105 18:37:38.909544   44959 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1105 18:37:38.909556   44959 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1105 18:37:38.909565   44959 command_runner.go:130] > # stream_port = "0"
	I1105 18:37:38.909576   44959 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1105 18:37:38.909587   44959 command_runner.go:130] > # stream_enable_tls = false
	I1105 18:37:38.909600   44959 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1105 18:37:38.909613   44959 command_runner.go:130] > # stream_idle_timeout = ""
	I1105 18:37:38.909630   44959 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1105 18:37:38.909643   44959 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1105 18:37:38.909652   44959 command_runner.go:130] > # minutes.
	I1105 18:37:38.909659   44959 command_runner.go:130] > # stream_tls_cert = ""
	I1105 18:37:38.909676   44959 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1105 18:37:38.909688   44959 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1105 18:37:38.909696   44959 command_runner.go:130] > # stream_tls_key = ""
	I1105 18:37:38.909716   44959 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1105 18:37:38.909729   44959 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1105 18:37:38.909758   44959 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1105 18:37:38.909767   44959 command_runner.go:130] > # stream_tls_ca = ""
	I1105 18:37:38.909780   44959 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1105 18:37:38.909790   44959 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1105 18:37:38.909805   44959 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1105 18:37:38.909816   44959 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1105 18:37:38.909829   44959 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1105 18:37:38.909842   44959 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1105 18:37:38.909850   44959 command_runner.go:130] > [crio.runtime]
	I1105 18:37:38.909860   44959 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1105 18:37:38.909873   44959 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1105 18:37:38.909883   44959 command_runner.go:130] > # "nofile=1024:2048"
	I1105 18:37:38.909895   44959 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1105 18:37:38.909904   44959 command_runner.go:130] > # default_ulimits = [
	I1105 18:37:38.909911   44959 command_runner.go:130] > # ]
	I1105 18:37:38.909923   44959 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1105 18:37:38.909932   44959 command_runner.go:130] > # no_pivot = false
	I1105 18:37:38.909943   44959 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1105 18:37:38.909956   44959 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1105 18:37:38.909967   44959 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1105 18:37:38.909978   44959 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1105 18:37:38.909989   44959 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1105 18:37:38.910001   44959 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1105 18:37:38.910019   44959 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1105 18:37:38.910030   44959 command_runner.go:130] > # Cgroup setting for conmon
	I1105 18:37:38.910044   44959 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1105 18:37:38.910054   44959 command_runner.go:130] > conmon_cgroup = "pod"
	I1105 18:37:38.910067   44959 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1105 18:37:38.910078   44959 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1105 18:37:38.910099   44959 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1105 18:37:38.910108   44959 command_runner.go:130] > conmon_env = [
	I1105 18:37:38.910119   44959 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1105 18:37:38.910126   44959 command_runner.go:130] > ]
	I1105 18:37:38.910136   44959 command_runner.go:130] > # Additional environment variables to set for all the
	I1105 18:37:38.910148   44959 command_runner.go:130] > # containers. These are overridden if set in the
	I1105 18:37:38.910161   44959 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1105 18:37:38.910170   44959 command_runner.go:130] > # default_env = [
	I1105 18:37:38.910176   44959 command_runner.go:130] > # ]
	I1105 18:37:38.910189   44959 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1105 18:37:38.910204   44959 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1105 18:37:38.910215   44959 command_runner.go:130] > # selinux = false
	I1105 18:37:38.910229   44959 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1105 18:37:38.910253   44959 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1105 18:37:38.910266   44959 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1105 18:37:38.910276   44959 command_runner.go:130] > # seccomp_profile = ""
	I1105 18:37:38.910296   44959 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1105 18:37:38.910309   44959 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1105 18:37:38.910320   44959 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1105 18:37:38.910330   44959 command_runner.go:130] > # which might increase security.
	I1105 18:37:38.910339   44959 command_runner.go:130] > # This option is currently deprecated,
	I1105 18:37:38.910352   44959 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1105 18:37:38.910363   44959 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1105 18:37:38.910375   44959 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1105 18:37:38.910388   44959 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1105 18:37:38.910401   44959 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1105 18:37:38.910412   44959 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1105 18:37:38.910430   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.910440   44959 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1105 18:37:38.910453   44959 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1105 18:37:38.910464   44959 command_runner.go:130] > # the cgroup blockio controller.
	I1105 18:37:38.910472   44959 command_runner.go:130] > # blockio_config_file = ""
	I1105 18:37:38.910484   44959 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1105 18:37:38.910493   44959 command_runner.go:130] > # blockio parameters.
	I1105 18:37:38.910502   44959 command_runner.go:130] > # blockio_reload = false
	I1105 18:37:38.910516   44959 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1105 18:37:38.910525   44959 command_runner.go:130] > # irqbalance daemon.
	I1105 18:37:38.910537   44959 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1105 18:37:38.910553   44959 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1105 18:37:38.910567   44959 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1105 18:37:38.910582   44959 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1105 18:37:38.910595   44959 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1105 18:37:38.910610   44959 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1105 18:37:38.910622   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.910633   44959 command_runner.go:130] > # rdt_config_file = ""
	I1105 18:37:38.910645   44959 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1105 18:37:38.910654   44959 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1105 18:37:38.910695   44959 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1105 18:37:38.910710   44959 command_runner.go:130] > # separate_pull_cgroup = ""
	I1105 18:37:38.910724   44959 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1105 18:37:38.910736   44959 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1105 18:37:38.910745   44959 command_runner.go:130] > # will be added.
	I1105 18:37:38.910753   44959 command_runner.go:130] > # default_capabilities = [
	I1105 18:37:38.910762   44959 command_runner.go:130] > # 	"CHOWN",
	I1105 18:37:38.910770   44959 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1105 18:37:38.910778   44959 command_runner.go:130] > # 	"FSETID",
	I1105 18:37:38.910785   44959 command_runner.go:130] > # 	"FOWNER",
	I1105 18:37:38.910791   44959 command_runner.go:130] > # 	"SETGID",
	I1105 18:37:38.910799   44959 command_runner.go:130] > # 	"SETUID",
	I1105 18:37:38.910808   44959 command_runner.go:130] > # 	"SETPCAP",
	I1105 18:37:38.910822   44959 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1105 18:37:38.910831   44959 command_runner.go:130] > # 	"KILL",
	I1105 18:37:38.910837   44959 command_runner.go:130] > # ]
	I1105 18:37:38.910850   44959 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1105 18:37:38.910863   44959 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1105 18:37:38.910874   44959 command_runner.go:130] > # add_inheritable_capabilities = false
	I1105 18:37:38.910886   44959 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1105 18:37:38.910899   44959 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1105 18:37:38.910909   44959 command_runner.go:130] > default_sysctls = [
	I1105 18:37:38.910919   44959 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1105 18:37:38.910927   44959 command_runner.go:130] > ]
	I1105 18:37:38.910935   44959 command_runner.go:130] > # List of devices on the host that a
	I1105 18:37:38.910949   44959 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1105 18:37:38.910957   44959 command_runner.go:130] > # allowed_devices = [
	I1105 18:37:38.910964   44959 command_runner.go:130] > # 	"/dev/fuse",
	I1105 18:37:38.910986   44959 command_runner.go:130] > # ]
	I1105 18:37:38.910997   44959 command_runner.go:130] > # List of additional devices. specified as
	I1105 18:37:38.911012   44959 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1105 18:37:38.911023   44959 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1105 18:37:38.911038   44959 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1105 18:37:38.911047   44959 command_runner.go:130] > # additional_devices = [
	I1105 18:37:38.911053   44959 command_runner.go:130] > # ]
	I1105 18:37:38.911063   44959 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1105 18:37:38.911076   44959 command_runner.go:130] > # cdi_spec_dirs = [
	I1105 18:37:38.911084   44959 command_runner.go:130] > # 	"/etc/cdi",
	I1105 18:37:38.911092   44959 command_runner.go:130] > # 	"/var/run/cdi",
	I1105 18:37:38.911099   44959 command_runner.go:130] > # ]
	I1105 18:37:38.911111   44959 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1105 18:37:38.911124   44959 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1105 18:37:38.911133   44959 command_runner.go:130] > # Defaults to false.
	I1105 18:37:38.911142   44959 command_runner.go:130] > # device_ownership_from_security_context = false
	I1105 18:37:38.911153   44959 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1105 18:37:38.911166   44959 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1105 18:37:38.911184   44959 command_runner.go:130] > # hooks_dir = [
	I1105 18:37:38.911195   44959 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1105 18:37:38.911201   44959 command_runner.go:130] > # ]
	I1105 18:37:38.911213   44959 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1105 18:37:38.911226   44959 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1105 18:37:38.911235   44959 command_runner.go:130] > # its default mounts from the following two files:
	I1105 18:37:38.911243   44959 command_runner.go:130] > #
	I1105 18:37:38.911254   44959 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1105 18:37:38.911267   44959 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1105 18:37:38.911279   44959 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1105 18:37:38.911287   44959 command_runner.go:130] > #
	I1105 18:37:38.911298   44959 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1105 18:37:38.911311   44959 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1105 18:37:38.911324   44959 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1105 18:37:38.911335   44959 command_runner.go:130] > #      only add mounts it finds in this file.
	I1105 18:37:38.911343   44959 command_runner.go:130] > #
	I1105 18:37:38.911351   44959 command_runner.go:130] > # default_mounts_file = ""
	I1105 18:37:38.911363   44959 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1105 18:37:38.911377   44959 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1105 18:37:38.911387   44959 command_runner.go:130] > pids_limit = 1024
	I1105 18:37:38.911400   44959 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1105 18:37:38.911413   44959 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1105 18:37:38.911427   44959 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1105 18:37:38.911442   44959 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1105 18:37:38.911452   44959 command_runner.go:130] > # log_size_max = -1
	I1105 18:37:38.911467   44959 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1105 18:37:38.911476   44959 command_runner.go:130] > # log_to_journald = false
	I1105 18:37:38.911487   44959 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1105 18:37:38.911498   44959 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1105 18:37:38.911510   44959 command_runner.go:130] > # Path to directory for container attach sockets.
	I1105 18:37:38.911521   44959 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1105 18:37:38.911532   44959 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1105 18:37:38.911542   44959 command_runner.go:130] > # bind_mount_prefix = ""
	I1105 18:37:38.911562   44959 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1105 18:37:38.911572   44959 command_runner.go:130] > # read_only = false
	I1105 18:37:38.911582   44959 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1105 18:37:38.911595   44959 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1105 18:37:38.911605   44959 command_runner.go:130] > # live configuration reload.
	I1105 18:37:38.911615   44959 command_runner.go:130] > # log_level = "info"
	I1105 18:37:38.911625   44959 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1105 18:37:38.911636   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.911643   44959 command_runner.go:130] > # log_filter = ""
	I1105 18:37:38.911656   44959 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1105 18:37:38.911671   44959 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1105 18:37:38.911681   44959 command_runner.go:130] > # separated by comma.
	I1105 18:37:38.911696   44959 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1105 18:37:38.911711   44959 command_runner.go:130] > # uid_mappings = ""
	I1105 18:37:38.911724   44959 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1105 18:37:38.911741   44959 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1105 18:37:38.911751   44959 command_runner.go:130] > # separated by comma.
	I1105 18:37:38.911766   44959 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1105 18:37:38.911776   44959 command_runner.go:130] > # gid_mappings = ""
	I1105 18:37:38.911790   44959 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1105 18:37:38.911802   44959 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1105 18:37:38.911816   44959 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1105 18:37:38.911831   44959 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1105 18:37:38.911841   44959 command_runner.go:130] > # minimum_mappable_uid = -1
	I1105 18:37:38.911853   44959 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1105 18:37:38.911865   44959 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1105 18:37:38.911878   44959 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1105 18:37:38.911895   44959 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1105 18:37:38.911910   44959 command_runner.go:130] > # minimum_mappable_gid = -1
	I1105 18:37:38.911924   44959 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1105 18:37:38.911937   44959 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1105 18:37:38.911950   44959 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1105 18:37:38.911959   44959 command_runner.go:130] > # ctr_stop_timeout = 30
	I1105 18:37:38.911975   44959 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1105 18:37:38.911987   44959 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1105 18:37:38.911997   44959 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1105 18:37:38.912008   44959 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1105 18:37:38.912017   44959 command_runner.go:130] > drop_infra_ctr = false
	I1105 18:37:38.912029   44959 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1105 18:37:38.912042   44959 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1105 18:37:38.912057   44959 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1105 18:37:38.912066   44959 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1105 18:37:38.912080   44959 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1105 18:37:38.912093   44959 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1105 18:37:38.912105   44959 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1105 18:37:38.912117   44959 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1105 18:37:38.912126   44959 command_runner.go:130] > # shared_cpuset = ""
	I1105 18:37:38.912137   44959 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1105 18:37:38.912148   44959 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1105 18:37:38.912156   44959 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1105 18:37:38.912170   44959 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1105 18:37:38.912180   44959 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1105 18:37:38.912193   44959 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1105 18:37:38.912207   44959 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1105 18:37:38.912217   44959 command_runner.go:130] > # enable_criu_support = false
	I1105 18:37:38.912228   44959 command_runner.go:130] > # Enable/disable the generation of the container,
	I1105 18:37:38.912241   44959 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1105 18:37:38.912252   44959 command_runner.go:130] > # enable_pod_events = false
	I1105 18:37:38.912265   44959 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1105 18:37:38.912279   44959 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1105 18:37:38.912290   44959 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1105 18:37:38.912300   44959 command_runner.go:130] > # default_runtime = "runc"
	I1105 18:37:38.912310   44959 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1105 18:37:38.912323   44959 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1105 18:37:38.912342   44959 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1105 18:37:38.912356   44959 command_runner.go:130] > # creation as a file is not desired either.
	I1105 18:37:38.912378   44959 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1105 18:37:38.912390   44959 command_runner.go:130] > # the hostname is being managed dynamically.
	I1105 18:37:38.912399   44959 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1105 18:37:38.912407   44959 command_runner.go:130] > # ]
	I1105 18:37:38.912419   44959 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1105 18:37:38.912432   44959 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1105 18:37:38.912445   44959 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1105 18:37:38.912457   44959 command_runner.go:130] > # Each entry in the table should follow the format:
	I1105 18:37:38.912464   44959 command_runner.go:130] > #
	I1105 18:37:38.912473   44959 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1105 18:37:38.912483   44959 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1105 18:37:38.912998   44959 command_runner.go:130] > # runtime_type = "oci"
	I1105 18:37:38.913025   44959 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1105 18:37:38.913035   44959 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1105 18:37:38.913042   44959 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1105 18:37:38.913056   44959 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1105 18:37:38.913062   44959 command_runner.go:130] > # monitor_env = []
	I1105 18:37:38.913069   44959 command_runner.go:130] > # privileged_without_host_devices = false
	I1105 18:37:38.913075   44959 command_runner.go:130] > # allowed_annotations = []
	I1105 18:37:38.913088   44959 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1105 18:37:38.913098   44959 command_runner.go:130] > # Where:
	I1105 18:37:38.913107   44959 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1105 18:37:38.913117   44959 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1105 18:37:38.913132   44959 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1105 18:37:38.913141   44959 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1105 18:37:38.913147   44959 command_runner.go:130] > #   in $PATH.
	I1105 18:37:38.913162   44959 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1105 18:37:38.913170   44959 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1105 18:37:38.913179   44959 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1105 18:37:38.913185   44959 command_runner.go:130] > #   state.
	I1105 18:37:38.913200   44959 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1105 18:37:38.913209   44959 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1105 18:37:38.913223   44959 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1105 18:37:38.913238   44959 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1105 18:37:38.913247   44959 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1105 18:37:38.913262   44959 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1105 18:37:38.913273   44959 command_runner.go:130] > #   The currently recognized values are:
	I1105 18:37:38.913282   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1105 18:37:38.913299   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1105 18:37:38.913308   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1105 18:37:38.913322   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1105 18:37:38.913334   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1105 18:37:38.913343   44959 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1105 18:37:38.913358   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1105 18:37:38.913368   44959 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1105 18:37:38.913383   44959 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1105 18:37:38.913392   44959 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1105 18:37:38.913401   44959 command_runner.go:130] > #   deprecated option "conmon".
	I1105 18:37:38.913418   44959 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1105 18:37:38.913426   44959 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1105 18:37:38.913436   44959 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1105 18:37:38.913448   44959 command_runner.go:130] > #   should be moved to the container's cgroup
	I1105 18:37:38.913458   44959 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1105 18:37:38.913467   44959 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1105 18:37:38.913482   44959 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1105 18:37:38.913490   44959 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1105 18:37:38.913496   44959 command_runner.go:130] > #
	I1105 18:37:38.913503   44959 command_runner.go:130] > # Using the seccomp notifier feature:
	I1105 18:37:38.913513   44959 command_runner.go:130] > #
	I1105 18:37:38.913522   44959 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1105 18:37:38.913532   44959 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1105 18:37:38.913537   44959 command_runner.go:130] > #
	I1105 18:37:38.913551   44959 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1105 18:37:38.913561   44959 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1105 18:37:38.913565   44959 command_runner.go:130] > #
	I1105 18:37:38.913579   44959 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1105 18:37:38.913591   44959 command_runner.go:130] > # feature.
	I1105 18:37:38.913595   44959 command_runner.go:130] > #
	I1105 18:37:38.913604   44959 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1105 18:37:38.913618   44959 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1105 18:37:38.913627   44959 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1105 18:37:38.913642   44959 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1105 18:37:38.913656   44959 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1105 18:37:38.913667   44959 command_runner.go:130] > #
	I1105 18:37:38.913677   44959 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1105 18:37:38.913691   44959 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1105 18:37:38.913701   44959 command_runner.go:130] > #
	I1105 18:37:38.913719   44959 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1105 18:37:38.913734   44959 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1105 18:37:38.913738   44959 command_runner.go:130] > #
	I1105 18:37:38.913748   44959 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1105 18:37:38.913757   44959 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1105 18:37:38.913762   44959 command_runner.go:130] > # limitation.
	I1105 18:37:38.913777   44959 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1105 18:37:38.913784   44959 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1105 18:37:38.913792   44959 command_runner.go:130] > runtime_type = "oci"
	I1105 18:37:38.913821   44959 command_runner.go:130] > runtime_root = "/run/runc"
	I1105 18:37:38.913857   44959 command_runner.go:130] > runtime_config_path = ""
	I1105 18:37:38.913870   44959 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1105 18:37:38.913877   44959 command_runner.go:130] > monitor_cgroup = "pod"
	I1105 18:37:38.913884   44959 command_runner.go:130] > monitor_exec_cgroup = ""
	I1105 18:37:38.913896   44959 command_runner.go:130] > monitor_env = [
	I1105 18:37:38.913908   44959 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1105 18:37:38.913912   44959 command_runner.go:130] > ]
	I1105 18:37:38.913919   44959 command_runner.go:130] > privileged_without_host_devices = false
	I1105 18:37:38.913937   44959 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1105 18:37:38.913950   44959 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1105 18:37:38.913966   44959 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1105 18:37:38.914009   44959 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1105 18:37:38.914017   44959 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1105 18:37:38.914026   44959 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1105 18:37:38.914037   44959 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1105 18:37:38.914073   44959 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1105 18:37:38.914080   44959 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1105 18:37:38.914358   44959 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1105 18:37:38.914378   44959 command_runner.go:130] > # Example:
	I1105 18:37:38.914387   44959 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1105 18:37:38.914395   44959 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1105 18:37:38.914410   44959 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1105 18:37:38.914421   44959 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1105 18:37:38.914429   44959 command_runner.go:130] > # cpuset = 0
	I1105 18:37:38.914537   44959 command_runner.go:130] > # cpushares = "0-1"
	I1105 18:37:38.914559   44959 command_runner.go:130] > # Where:
	I1105 18:37:38.914567   44959 command_runner.go:130] > # The workload name is workload-type.
	I1105 18:37:38.914578   44959 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1105 18:37:38.914590   44959 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1105 18:37:38.914603   44959 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1105 18:37:38.914619   44959 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1105 18:37:38.914633   44959 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
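Tying the commented example together: a pod opts into the example "workload-type" workload via the activation annotation and can override a single resource per container with the prefixed annotation. A hedged sketch (the cpushares value and image are illustrative only):

# hypothetical pod using the example workload definition from the comments above
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                                 # activation annotation (key only, value ignored)
    io.crio.workload-type/demo: '{"cpushares": "512"}'   # per-container override for container "demo"
spec:
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.10
EOF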
	I1105 18:37:38.914644   44959 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1105 18:37:38.914658   44959 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1105 18:37:38.914669   44959 command_runner.go:130] > # Default value is set to true
	I1105 18:37:38.914677   44959 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1105 18:37:38.914688   44959 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1105 18:37:38.914696   44959 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1105 18:37:38.914701   44959 command_runner.go:130] > # Default value is set to 'false'
	I1105 18:37:38.914707   44959 command_runner.go:130] > # disable_hostport_mapping = false
	I1105 18:37:38.914714   44959 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1105 18:37:38.914719   44959 command_runner.go:130] > #
	I1105 18:37:38.914725   44959 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1105 18:37:38.914733   44959 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1105 18:37:38.914742   44959 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1105 18:37:38.914750   44959 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1105 18:37:38.914758   44959 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1105 18:37:38.914762   44959 command_runner.go:130] > [crio.image]
	I1105 18:37:38.914771   44959 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1105 18:37:38.914778   44959 command_runner.go:130] > # default_transport = "docker://"
	I1105 18:37:38.914784   44959 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1105 18:37:38.914793   44959 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1105 18:37:38.914799   44959 command_runner.go:130] > # global_auth_file = ""
	I1105 18:37:38.914805   44959 command_runner.go:130] > # The image used to instantiate infra containers.
	I1105 18:37:38.914812   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.914816   44959 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1105 18:37:38.914825   44959 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1105 18:37:38.914833   44959 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1105 18:37:38.914839   44959 command_runner.go:130] > # This option supports live configuration reload.
	I1105 18:37:38.914852   44959 command_runner.go:130] > # pause_image_auth_file = ""
	I1105 18:37:38.914860   44959 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1105 18:37:38.914866   44959 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1105 18:37:38.914874   44959 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1105 18:37:38.914882   44959 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1105 18:37:38.914886   44959 command_runner.go:130] > # pause_command = "/pause"
	I1105 18:37:38.914902   44959 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1105 18:37:38.914910   44959 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1105 18:37:38.914918   44959 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1105 18:37:38.914931   44959 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1105 18:37:38.914939   44959 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1105 18:37:38.914951   44959 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1105 18:37:38.914960   44959 command_runner.go:130] > # pinned_images = [
	I1105 18:37:38.914964   44959 command_runner.go:130] > # ]
	I1105 18:37:38.914988   44959 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1105 18:37:38.915002   44959 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1105 18:37:38.915015   44959 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1105 18:37:38.915029   44959 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1105 18:37:38.915039   44959 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1105 18:37:38.915045   44959 command_runner.go:130] > # signature_policy = ""
	I1105 18:37:38.915054   44959 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1105 18:37:38.915067   44959 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1105 18:37:38.915077   44959 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1105 18:37:38.915083   44959 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1105 18:37:38.915099   44959 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1105 18:37:38.915106   44959 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1105 18:37:38.915112   44959 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1105 18:37:38.915121   44959 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1105 18:37:38.915127   44959 command_runner.go:130] > # changing them here.
	I1105 18:37:38.915131   44959 command_runner.go:130] > # insecure_registries = [
	I1105 18:37:38.915136   44959 command_runner.go:130] > # ]
	I1105 18:37:38.915142   44959 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1105 18:37:38.915150   44959 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1105 18:37:38.915157   44959 command_runner.go:130] > # image_volumes = "mkdir"
	I1105 18:37:38.915161   44959 command_runner.go:130] > # Temporary directory to use for storing big files
	I1105 18:37:38.915168   44959 command_runner.go:130] > # big_files_temporary_dir = ""
	I1105 18:37:38.915176   44959 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1105 18:37:38.915182   44959 command_runner.go:130] > # CNI plugins.
	I1105 18:37:38.915186   44959 command_runner.go:130] > [crio.network]
	I1105 18:37:38.915194   44959 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1105 18:37:38.915202   44959 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1105 18:37:38.915206   44959 command_runner.go:130] > # cni_default_network = ""
	I1105 18:37:38.915214   44959 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1105 18:37:38.915220   44959 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1105 18:37:38.915225   44959 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1105 18:37:38.915231   44959 command_runner.go:130] > # plugin_dirs = [
	I1105 18:37:38.915235   44959 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1105 18:37:38.915241   44959 command_runner.go:130] > # ]
	I1105 18:37:38.915246   44959 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1105 18:37:38.915252   44959 command_runner.go:130] > [crio.metrics]
	I1105 18:37:38.915257   44959 command_runner.go:130] > # Globally enable or disable metrics support.
	I1105 18:37:38.915260   44959 command_runner.go:130] > enable_metrics = true
	I1105 18:37:38.915266   44959 command_runner.go:130] > # Specify enabled metrics collectors.
	I1105 18:37:38.915273   44959 command_runner.go:130] > # Per default all metrics are enabled.
	I1105 18:37:38.915279   44959 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1105 18:37:38.915288   44959 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1105 18:37:38.915296   44959 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1105 18:37:38.915302   44959 command_runner.go:130] > # metrics_collectors = [
	I1105 18:37:38.915306   44959 command_runner.go:130] > # 	"operations",
	I1105 18:37:38.915312   44959 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1105 18:37:38.915317   44959 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1105 18:37:38.915321   44959 command_runner.go:130] > # 	"operations_errors",
	I1105 18:37:38.915326   44959 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1105 18:37:38.915330   44959 command_runner.go:130] > # 	"image_pulls_by_name",
	I1105 18:37:38.915336   44959 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1105 18:37:38.915340   44959 command_runner.go:130] > # 	"image_pulls_failures",
	I1105 18:37:38.915344   44959 command_runner.go:130] > # 	"image_pulls_successes",
	I1105 18:37:38.915351   44959 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1105 18:37:38.915355   44959 command_runner.go:130] > # 	"image_layer_reuse",
	I1105 18:37:38.915362   44959 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1105 18:37:38.915366   44959 command_runner.go:130] > # 	"containers_oom_total",
	I1105 18:37:38.915372   44959 command_runner.go:130] > # 	"containers_oom",
	I1105 18:37:38.915376   44959 command_runner.go:130] > # 	"processes_defunct",
	I1105 18:37:38.915382   44959 command_runner.go:130] > # 	"operations_total",
	I1105 18:37:38.915386   44959 command_runner.go:130] > # 	"operations_latency_seconds",
	I1105 18:37:38.915393   44959 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1105 18:37:38.915397   44959 command_runner.go:130] > # 	"operations_errors_total",
	I1105 18:37:38.915403   44959 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1105 18:37:38.915408   44959 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1105 18:37:38.915414   44959 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1105 18:37:38.915418   44959 command_runner.go:130] > # 	"image_pulls_success_total",
	I1105 18:37:38.915430   44959 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1105 18:37:38.915437   44959 command_runner.go:130] > # 	"containers_oom_count_total",
	I1105 18:37:38.915441   44959 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1105 18:37:38.915448   44959 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1105 18:37:38.915451   44959 command_runner.go:130] > # ]
	I1105 18:37:38.915460   44959 command_runner.go:130] > # The port on which the metrics server will listen.
	I1105 18:37:38.915466   44959 command_runner.go:130] > # metrics_port = 9090
	I1105 18:37:38.915471   44959 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1105 18:37:38.915477   44959 command_runner.go:130] > # metrics_socket = ""
	I1105 18:37:38.915482   44959 command_runner.go:130] > # The certificate for the secure metrics server.
	I1105 18:37:38.915490   44959 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1105 18:37:38.915499   44959 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1105 18:37:38.915504   44959 command_runner.go:130] > # certificate on any modification event.
	I1105 18:37:38.915510   44959 command_runner.go:130] > # metrics_cert = ""
	I1105 18:37:38.915515   44959 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1105 18:37:38.915522   44959 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1105 18:37:38.915526   44959 command_runner.go:130] > # metrics_key = ""
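With enable_metrics = true and the commented default metrics_port = 9090 shown above, the Prometheus endpoint can be scraped from the node itself. A quick sketch, assuming the port was left at its default and no metrics_cert/metrics_key were configured:

# scrape CRI-O's Prometheus metrics from inside the minikube node (plain HTTP, default port assumed)
out/minikube-linux-amd64 ssh -p multinode-501442 -- curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head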
	I1105 18:37:38.915534   44959 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1105 18:37:38.915541   44959 command_runner.go:130] > [crio.tracing]
	I1105 18:37:38.915546   44959 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1105 18:37:38.915552   44959 command_runner.go:130] > # enable_tracing = false
	I1105 18:37:38.915558   44959 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1105 18:37:38.915565   44959 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1105 18:37:38.915573   44959 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1105 18:37:38.915584   44959 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1105 18:37:38.915591   44959 command_runner.go:130] > # CRI-O NRI configuration.
	I1105 18:37:38.915595   44959 command_runner.go:130] > [crio.nri]
	I1105 18:37:38.915599   44959 command_runner.go:130] > # Globally enable or disable NRI.
	I1105 18:37:38.915605   44959 command_runner.go:130] > # enable_nri = false
	I1105 18:37:38.915610   44959 command_runner.go:130] > # NRI socket to listen on.
	I1105 18:37:38.915616   44959 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1105 18:37:38.915620   44959 command_runner.go:130] > # NRI plugin directory to use.
	I1105 18:37:38.915625   44959 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1105 18:37:38.915630   44959 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1105 18:37:38.915637   44959 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1105 18:37:38.915642   44959 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1105 18:37:38.915651   44959 command_runner.go:130] > # nri_disable_connections = false
	I1105 18:37:38.915659   44959 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1105 18:37:38.915664   44959 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1105 18:37:38.915671   44959 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1105 18:37:38.915676   44959 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1105 18:37:38.915691   44959 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1105 18:37:38.915701   44959 command_runner.go:130] > [crio.stats]
	I1105 18:37:38.915709   44959 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1105 18:37:38.915715   44959 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1105 18:37:38.915721   44959 command_runner.go:130] > # stats_collection_period = 0
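The dump above is the CRI-O configuration as reported for the node. Outside of a test log, the same information can be read from the node directly; the path below is the conventional location and is an assumption, not something shown in this run:

# inspect the CRI-O config on the node (path assumed, not taken from this log)
out/minikube-linux-amd64 ssh -p multinode-501442 -- sudo cat /etc/crio/crio.conf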
	I1105 18:37:38.915791   44959 cni.go:84] Creating CNI manager for ""
	I1105 18:37:38.915804   44959 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 18:37:38.915814   44959 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:37:38.915836   44959 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-501442 NodeName:multinode-501442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:37:38.915953   44959 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-501442"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.235"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 18:37:38.916010   44959 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:37:38.926318   44959 command_runner.go:130] > kubeadm
	I1105 18:37:38.926341   44959 command_runner.go:130] > kubectl
	I1105 18:37:38.926347   44959 command_runner.go:130] > kubelet
	I1105 18:37:38.926389   44959 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:37:38.926447   44959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 18:37:38.936004   44959 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1105 18:37:38.951622   44959 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:37:38.967674   44959 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
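The generated kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new on the node. As a sketch, the same file can be sanity-checked with kubeadm's own validator (available in recent kubeadm releases; the binary path is the one listed a few lines above):

# validate the rendered kubeadm config on the node before it is applied
out/minikube-linux-amd64 ssh -p multinode-501442 -- sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new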
	I1105 18:37:38.982931   44959 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I1105 18:37:38.986588   44959 command_runner.go:130] > 192.168.39.235	control-plane.minikube.internal
	I1105 18:37:38.986667   44959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:37:39.128149   44959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:37:39.142448   44959 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442 for IP: 192.168.39.235
	I1105 18:37:39.142471   44959 certs.go:194] generating shared ca certs ...
	I1105 18:37:39.142485   44959 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:37:39.142621   44959 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:37:39.142658   44959 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:37:39.142671   44959 certs.go:256] generating profile certs ...
	I1105 18:37:39.142782   44959 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/client.key
	I1105 18:37:39.142842   44959 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.key.eff842b3
	I1105 18:37:39.142883   44959 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.key
	I1105 18:37:39.142894   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:37:39.142909   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:37:39.142922   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:37:39.142932   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:37:39.142944   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:37:39.142956   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:37:39.142985   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:37:39.143008   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:37:39.143078   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:37:39.143111   44959 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:37:39.143120   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:37:39.143140   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:37:39.143165   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:37:39.143186   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:37:39.143224   44959 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:37:39.143248   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.143263   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem -> /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.143275   44959 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.143906   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:37:39.167141   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:37:39.189604   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:37:39.212310   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:37:39.234362   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 18:37:39.256019   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:37:39.277446   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:37:39.299055   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/multinode-501442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:37:39.321762   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:37:39.343625   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:37:39.364936   44959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:37:39.387370   44959 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:37:39.403453   44959 ssh_runner.go:195] Run: openssl version
	I1105 18:37:39.410196   44959 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1105 18:37:39.410301   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:37:39.421206   44959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.425419   44959 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.425482   44959 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.425533   44959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:37:39.430647   44959 command_runner.go:130] > b5213941
	I1105 18:37:39.430897   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:37:39.440438   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:37:39.450922   44959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.455078   44959 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.455105   44959 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.455150   44959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:37:39.460371   44959 command_runner.go:130] > 51391683
	I1105 18:37:39.460435   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:37:39.469482   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:37:39.480026   44959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.484154   44959 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.484280   44959 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.484335   44959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:37:39.489792   44959 command_runner.go:130] > 3ec20f2e
	I1105 18:37:39.489849   44959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
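The three pairs of openssl/ln runs above all follow the same OpenSSL subject-hash convention: hash the certificate, then expose it in the system trust store as /etc/ssl/certs/<hash>.0. Condensed into a sketch for a single certificate:

# link one CA into the trust store by its OpenSSL subject hash
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"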
	I1105 18:37:39.498719   44959 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:37:39.502725   44959 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:37:39.502761   44959 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1105 18:37:39.502771   44959 command_runner.go:130] > Device: 253,1	Inode: 5244462     Links: 1
	I1105 18:37:39.502783   44959 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1105 18:37:39.502789   44959 command_runner.go:130] > Access: 2024-11-05 18:30:59.480150353 +0000
	I1105 18:37:39.502796   44959 command_runner.go:130] > Modify: 2024-11-05 18:30:59.480150353 +0000
	I1105 18:37:39.502801   44959 command_runner.go:130] > Change: 2024-11-05 18:30:59.480150353 +0000
	I1105 18:37:39.502806   44959 command_runner.go:130] >  Birth: 2024-11-05 18:30:59.480150353 +0000
	I1105 18:37:39.502846   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:37:39.508231   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.508297   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:37:39.513472   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.513538   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:37:39.518848   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.518899   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:37:39.523761   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.523881   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:37:39.528893   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.529044   44959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 18:37:39.533964   44959 command_runner.go:130] > Certificate will not expire
	I1105 18:37:39.534163   44959 kubeadm.go:392] StartCluster: {Name:multinode-501442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-501442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:37:39.534281   44959 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:37:39.534325   44959 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:37:39.569543   44959 command_runner.go:130] > ff2c842c433a37cd2e6ebecf01dccc56471a33a1b32dd128ede3a626dad85eae
	I1105 18:37:39.569573   44959 command_runner.go:130] > bda4c5ff9760f31549d67318c9231b3c270f281ab22d59acb512f7f543dd9f6e
	I1105 18:37:39.569583   44959 command_runner.go:130] > 8436bf7ad36acfe8556093d25a9b978f7f5ecf4f1f6cf4f595b10a00156c17df
	I1105 18:37:39.569595   44959 command_runner.go:130] > 12d7011690bfd50d49711ecadafa040173ac51c10ed10a77c3b01174eece06d0
	I1105 18:37:39.569605   44959 command_runner.go:130] > 5640c6ad72f610faa2987de91e3c26eb08f329dbeff15858c90987541499001a
	I1105 18:37:39.569614   44959 command_runner.go:130] > bcf0c4abf9bd5d335fcecc197fab96b31e98221619aa5a323415a55a38229f7c
	I1105 18:37:39.569622   44959 command_runner.go:130] > a633ece5a868ea38a983b5f7f9f64208bfe44221954702c308b47c4c6edff92f
	I1105 18:37:39.569637   44959 command_runner.go:130] > 7ee0a777d11270b8edce25900ac6246070ebe29c0ef97881366503b66f874f55
	I1105 18:37:39.569664   44959 cri.go:89] found id: "ff2c842c433a37cd2e6ebecf01dccc56471a33a1b32dd128ede3a626dad85eae"
	I1105 18:37:39.569676   44959 cri.go:89] found id: "bda4c5ff9760f31549d67318c9231b3c270f281ab22d59acb512f7f543dd9f6e"
	I1105 18:37:39.569681   44959 cri.go:89] found id: "8436bf7ad36acfe8556093d25a9b978f7f5ecf4f1f6cf4f595b10a00156c17df"
	I1105 18:37:39.569687   44959 cri.go:89] found id: "12d7011690bfd50d49711ecadafa040173ac51c10ed10a77c3b01174eece06d0"
	I1105 18:37:39.569691   44959 cri.go:89] found id: "5640c6ad72f610faa2987de91e3c26eb08f329dbeff15858c90987541499001a"
	I1105 18:37:39.569696   44959 cri.go:89] found id: "bcf0c4abf9bd5d335fcecc197fab96b31e98221619aa5a323415a55a38229f7c"
	I1105 18:37:39.569703   44959 cri.go:89] found id: "a633ece5a868ea38a983b5f7f9f64208bfe44221954702c308b47c4c6edff92f"
	I1105 18:37:39.569707   44959 cri.go:89] found id: "7ee0a777d11270b8edce25900ac6246070ebe29c0ef97881366503b66f874f55"
	I1105 18:37:39.569711   44959 cri.go:89] found id: ""
	I1105 18:37:39.569764   44959 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-501442 -n multinode-501442
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-501442 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.16s)

                                                
                                    
x
+
TestPreload (194.34s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-091301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-091301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m28.909878825s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-091301 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-091301 image pull gcr.io/k8s-minikube/busybox: (3.209814432s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-091301
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-091301: (6.615388535s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-091301 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1105 18:47:31.419118   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-091301 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.721534758s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-091301 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
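The failing assertion above checks the `image list` output for the busybox image that was pulled before the stop/start cycle. A rough manual equivalent against the same profile would look like this:

# reproduce the TestPreload assertion by hand
out/minikube-linux-amd64 -p test-preload-091301 image list \
  | grep -q 'gcr.io/k8s-minikube/busybox' \
  || echo 'busybox missing from image list after restart'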
panic.go:629: *** TestPreload FAILED at 2024-11-05 18:48:48.065667911 +0000 UTC m=+4061.738392596
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-091301 -n test-preload-091301
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-091301 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-091301 logs -n 25: (1.125243252s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442 sudo cat                                       | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m03_multinode-501442.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt                       | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m02:/home/docker/cp-test_multinode-501442-m03_multinode-501442-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n                                                                 | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | multinode-501442-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-501442 ssh -n multinode-501442-m02 sudo cat                                   | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-501442-m03_multinode-501442-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-501442 node stop m03                                                          | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:33 UTC |
	| node    | multinode-501442 node start                                                             | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:33 UTC | 05 Nov 24 18:34 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-501442                                                                | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:34 UTC |                     |
	| stop    | -p multinode-501442                                                                     | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:34 UTC |                     |
	| start   | -p multinode-501442                                                                     | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:36 UTC | 05 Nov 24 18:39 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-501442                                                                | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:39 UTC |                     |
	| node    | multinode-501442 node delete                                                            | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:39 UTC | 05 Nov 24 18:39 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-501442 stop                                                                   | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:39 UTC |                     |
	| start   | -p multinode-501442                                                                     | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:41 UTC | 05 Nov 24 18:44 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-501442                                                                | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:44 UTC |                     |
	| start   | -p multinode-501442-m02                                                                 | multinode-501442-m02 | jenkins | v1.34.0 | 05 Nov 24 18:44 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-501442-m03                                                                 | multinode-501442-m03 | jenkins | v1.34.0 | 05 Nov 24 18:44 UTC | 05 Nov 24 18:45 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-501442                                                                 | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:45 UTC |                     |
	| delete  | -p multinode-501442-m03                                                                 | multinode-501442-m03 | jenkins | v1.34.0 | 05 Nov 24 18:45 UTC | 05 Nov 24 18:45 UTC |
	| delete  | -p multinode-501442                                                                     | multinode-501442     | jenkins | v1.34.0 | 05 Nov 24 18:45 UTC | 05 Nov 24 18:45 UTC |
	| start   | -p test-preload-091301                                                                  | test-preload-091301  | jenkins | v1.34.0 | 05 Nov 24 18:45 UTC | 05 Nov 24 18:47 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-091301 image pull                                                          | test-preload-091301  | jenkins | v1.34.0 | 05 Nov 24 18:47 UTC | 05 Nov 24 18:47 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-091301                                                                  | test-preload-091301  | jenkins | v1.34.0 | 05 Nov 24 18:47 UTC | 05 Nov 24 18:47 UTC |
	| start   | -p test-preload-091301                                                                  | test-preload-091301  | jenkins | v1.34.0 | 05 Nov 24 18:47 UTC | 05 Nov 24 18:48 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-091301 image list                                                          | test-preload-091301  | jenkins | v1.34.0 | 05 Nov 24 18:48 UTC | 05 Nov 24 18:48 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:47:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:47:15.163610   49279 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:47:15.163861   49279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:47:15.163871   49279 out.go:358] Setting ErrFile to fd 2...
	I1105 18:47:15.163876   49279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:47:15.164049   49279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:47:15.164554   49279 out.go:352] Setting JSON to false
	I1105 18:47:15.165431   49279 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5377,"bootTime":1730827058,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:47:15.165522   49279 start.go:139] virtualization: kvm guest
	I1105 18:47:15.167718   49279 out.go:177] * [test-preload-091301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:47:15.169006   49279 notify.go:220] Checking for updates...
	I1105 18:47:15.169021   49279 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:47:15.170280   49279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:47:15.171509   49279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:47:15.172856   49279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:47:15.174339   49279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:47:15.175609   49279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:47:15.177257   49279 config.go:182] Loaded profile config "test-preload-091301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1105 18:47:15.177908   49279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:47:15.177956   49279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:47:15.193247   49279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I1105 18:47:15.193776   49279 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:47:15.194295   49279 main.go:141] libmachine: Using API Version  1
	I1105 18:47:15.194313   49279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:47:15.194679   49279 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:47:15.194857   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:15.196697   49279 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1105 18:47:15.198046   49279 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:47:15.198327   49279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:47:15.198360   49279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:47:15.212542   49279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I1105 18:47:15.212972   49279 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:47:15.213408   49279 main.go:141] libmachine: Using API Version  1
	I1105 18:47:15.213428   49279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:47:15.213701   49279 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:47:15.213863   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:15.248575   49279 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:47:15.249770   49279 start.go:297] selected driver: kvm2
	I1105 18:47:15.249784   49279 start.go:901] validating driver "kvm2" against &{Name:test-preload-091301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-091301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:47:15.249879   49279 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:47:15.250558   49279 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:47:15.250635   49279 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:47:15.265448   49279 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:47:15.265809   49279 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:47:15.265839   49279 cni.go:84] Creating CNI manager for ""
	I1105 18:47:15.265884   49279 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:47:15.265925   49279 start.go:340] cluster config:
	{Name:test-preload-091301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-091301 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:47:15.266038   49279 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:47:15.267734   49279 out.go:177] * Starting "test-preload-091301" primary control-plane node in "test-preload-091301" cluster
	I1105 18:47:15.268925   49279 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1105 18:47:15.382886   49279 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1105 18:47:15.382930   49279 cache.go:56] Caching tarball of preloaded images
	I1105 18:47:15.383119   49279 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1105 18:47:15.384874   49279 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1105 18:47:15.386130   49279 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1105 18:47:15.491988   49279 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1105 18:47:27.012140   49279 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1105 18:47:27.012257   49279 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1105 18:47:27.851494   49279 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
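The preload step above (18:47:15.386 through 18:47:27.851) requests the tarball with an ?checksum=md5: hint, then saves and verifies the checksum before trusting the cached file. A minimal Go sketch of that download-then-verify pattern (the downloadAndVerify helper is hypothetical, not minikube's actual code; the URL and checksum are the ones from the log line):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadAndVerify streams url into dest and checks the body's MD5
// against wantMD5 (hex-encoded). Hypothetical helper for illustration.
func downloadAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to the file and the hash in a single pass over the body.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadAndVerify(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
		"/tmp/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
		"b2ee0ab83ed99f9e7ff71cb0cf27e8f9",
	)
	fmt.Println(err)
}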
	I1105 18:47:27.851627   49279 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/config.json ...
	I1105 18:47:27.851870   49279 start.go:360] acquireMachinesLock for test-preload-091301: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:47:27.851935   49279 start.go:364] duration metric: took 41.649µs to acquireMachinesLock for "test-preload-091301"
	I1105 18:47:27.851948   49279 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:47:27.851956   49279 fix.go:54] fixHost starting: 
	I1105 18:47:27.852216   49279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:47:27.852250   49279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:47:27.867061   49279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I1105 18:47:27.867514   49279 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:47:27.868014   49279 main.go:141] libmachine: Using API Version  1
	I1105 18:47:27.868037   49279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:47:27.868361   49279 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:47:27.868536   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:27.868705   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetState
	I1105 18:47:27.870311   49279 fix.go:112] recreateIfNeeded on test-preload-091301: state=Stopped err=<nil>
	I1105 18:47:27.870341   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	W1105 18:47:27.870500   49279 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:47:27.872712   49279 out.go:177] * Restarting existing kvm2 VM for "test-preload-091301" ...
	I1105 18:47:27.873954   49279 main.go:141] libmachine: (test-preload-091301) Calling .Start
	I1105 18:47:27.874141   49279 main.go:141] libmachine: (test-preload-091301) Ensuring networks are active...
	I1105 18:47:27.874954   49279 main.go:141] libmachine: (test-preload-091301) Ensuring network default is active
	I1105 18:47:27.875310   49279 main.go:141] libmachine: (test-preload-091301) Ensuring network mk-test-preload-091301 is active
	I1105 18:47:27.875637   49279 main.go:141] libmachine: (test-preload-091301) Getting domain xml...
	I1105 18:47:27.876408   49279 main.go:141] libmachine: (test-preload-091301) Creating domain...
	I1105 18:47:29.060294   49279 main.go:141] libmachine: (test-preload-091301) Waiting to get IP...
	I1105 18:47:29.061227   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:29.061601   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:29.061681   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:29.061591   49347 retry.go:31] will retry after 276.356582ms: waiting for machine to come up
	I1105 18:47:29.340217   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:29.340669   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:29.340694   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:29.340634   49347 retry.go:31] will retry after 371.872541ms: waiting for machine to come up
	I1105 18:47:29.714195   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:29.714608   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:29.714634   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:29.714553   49347 retry.go:31] will retry after 398.783509ms: waiting for machine to come up
	I1105 18:47:30.114892   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:30.115325   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:30.115350   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:30.115292   49347 retry.go:31] will retry after 410.988388ms: waiting for machine to come up
	I1105 18:47:30.527883   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:30.528258   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:30.528275   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:30.528228   49347 retry.go:31] will retry after 677.699131ms: waiting for machine to come up
	I1105 18:47:31.207108   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:31.207524   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:31.207555   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:31.207484   49347 retry.go:31] will retry after 928.166625ms: waiting for machine to come up
	I1105 18:47:32.137587   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:32.138001   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:32.138036   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:32.137926   49347 retry.go:31] will retry after 905.872761ms: waiting for machine to come up
	I1105 18:47:33.045311   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:33.045682   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:33.045735   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:33.045655   49347 retry.go:31] will retry after 1.357936083s: waiting for machine to come up
	I1105 18:47:34.405009   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:34.405466   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:34.405495   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:34.405431   49347 retry.go:31] will retry after 1.29247046s: waiting for machine to come up
	I1105 18:47:35.699124   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:35.699558   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:35.699583   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:35.699513   49347 retry.go:31] will retry after 2.243088789s: waiting for machine to come up
	I1105 18:47:37.944462   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:37.944898   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:37.944927   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:37.944786   49347 retry.go:31] will retry after 1.87397233s: waiting for machine to come up
	I1105 18:47:39.820734   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:39.821150   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:39.821173   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:39.821113   49347 retry.go:31] will retry after 3.547575257s: waiting for machine to come up
	I1105 18:47:43.369782   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:43.370255   49279 main.go:141] libmachine: (test-preload-091301) DBG | unable to find current IP address of domain test-preload-091301 in network mk-test-preload-091301
	I1105 18:47:43.370281   49279 main.go:141] libmachine: (test-preload-091301) DBG | I1105 18:47:43.370205   49347 retry.go:31] will retry after 3.498906964s: waiting for machine to come up
	I1105 18:47:46.872884   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:46.873361   49279 main.go:141] libmachine: (test-preload-091301) Found IP for machine: 192.168.39.235
	I1105 18:47:46.873388   49279 main.go:141] libmachine: (test-preload-091301) Reserving static IP address...
	I1105 18:47:46.873402   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has current primary IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:46.873856   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "test-preload-091301", mac: "52:54:00:a8:12:bd", ip: "192.168.39.235"} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:46.873879   49279 main.go:141] libmachine: (test-preload-091301) DBG | skip adding static IP to network mk-test-preload-091301 - found existing host DHCP lease matching {name: "test-preload-091301", mac: "52:54:00:a8:12:bd", ip: "192.168.39.235"}
	I1105 18:47:46.873892   49279 main.go:141] libmachine: (test-preload-091301) Reserved static IP address: 192.168.39.235
	I1105 18:47:46.873908   49279 main.go:141] libmachine: (test-preload-091301) Waiting for SSH to be available...
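The 18:47:29 to 18:47:46 block is a poll-with-growing-backoff loop: each failed DHCP-lease lookup schedules another attempt after a somewhat longer, jittered delay until the lease appears. A small Go sketch of the same retry shape (lookup and the backoff constants are placeholders, not libmachine's implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after every failed attempt.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Jittered, growing backoff, roughly the shape of the retry.go lines above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 { // placeholder: pretend the lease shows up on attempt 5
			return "", errors.New("no lease yet")
		}
		return "192.168.39.235", nil
	}, time.Minute)
	fmt.Println(ip, err)
}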
	I1105 18:47:46.873934   49279 main.go:141] libmachine: (test-preload-091301) DBG | Getting to WaitForSSH function...
	I1105 18:47:46.876207   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:46.876514   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:46.876539   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:46.876694   49279 main.go:141] libmachine: (test-preload-091301) DBG | Using SSH client type: external
	I1105 18:47:46.876716   49279 main.go:141] libmachine: (test-preload-091301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/test-preload-091301/id_rsa (-rw-------)
	I1105 18:47:46.876736   49279 main.go:141] libmachine: (test-preload-091301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/test-preload-091301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:47:46.876745   49279 main.go:141] libmachine: (test-preload-091301) DBG | About to run SSH command:
	I1105 18:47:46.876753   49279 main.go:141] libmachine: (test-preload-091301) DBG | exit 0
	I1105 18:47:47.002847   49279 main.go:141] libmachine: (test-preload-091301) DBG | SSH cmd err, output: <nil>: 
	I1105 18:47:47.003243   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetConfigRaw
	I1105 18:47:47.003810   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetIP
	I1105 18:47:47.006101   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.006414   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.006443   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.006653   49279 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/config.json ...
	I1105 18:47:47.006902   49279 machine.go:93] provisionDockerMachine start ...
	I1105 18:47:47.006924   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:47.007112   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:47.009039   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.009410   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.009435   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.009587   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:47.009755   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.009913   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.010052   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:47.010210   49279 main.go:141] libmachine: Using SSH client type: native
	I1105 18:47:47.010431   49279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:47:47.010447   49279 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:47:47.119008   49279 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 18:47:47.119039   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetMachineName
	I1105 18:47:47.119273   49279 buildroot.go:166] provisioning hostname "test-preload-091301"
	I1105 18:47:47.119299   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetMachineName
	I1105 18:47:47.119464   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:47.122207   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.122570   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.122595   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.122794   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:47.122963   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.123170   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.123297   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:47.123425   49279 main.go:141] libmachine: Using SSH client type: native
	I1105 18:47:47.123587   49279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:47:47.123599   49279 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-091301 && echo "test-preload-091301" | sudo tee /etc/hostname
	I1105 18:47:47.244371   49279 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-091301
	
	I1105 18:47:47.244411   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:47.247169   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.247491   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.247517   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.247665   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:47.247831   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.247966   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.248061   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:47.248163   49279 main.go:141] libmachine: Using SSH client type: native
	I1105 18:47:47.248359   49279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:47:47.248376   49279 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-091301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-091301/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-091301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:47:47.363019   49279 main.go:141] libmachine: SSH cmd err, output: <nil>: 
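The SSH command issued at 18:47:47.248 only touches /etc/hosts when no line already carries the new hostname: it rewrites an existing 127.0.1.1 entry if there is one, and appends a fresh entry otherwise. The same decision, sketched as a pure Go function over the file's contents (hypothetical helper, shell-free for clarity; it approximates the grep/sed logic rather than reproducing it exactly):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname mirrors the shell logic above: if no line ends with name,
// replace an existing "127.0.1.1 ..." line, otherwise append one.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		if strings.HasSuffix(t, " "+name) || strings.HasSuffix(t, "\t"+name) {
			return hosts // hostname already present, leave the file alone
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(in, "test-preload-091301"))
}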
	I1105 18:47:47.363048   49279 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:47:47.363081   49279 buildroot.go:174] setting up certificates
	I1105 18:47:47.363092   49279 provision.go:84] configureAuth start
	I1105 18:47:47.363104   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetMachineName
	I1105 18:47:47.363385   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetIP
	I1105 18:47:47.365879   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.366213   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.366244   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.366507   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:47.368716   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.369000   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.369040   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.369158   49279 provision.go:143] copyHostCerts
	I1105 18:47:47.369208   49279 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:47:47.369222   49279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:47:47.369287   49279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:47:47.369394   49279 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:47:47.369404   49279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:47:47.369433   49279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:47:47.369497   49279 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:47:47.369513   49279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:47:47.369539   49279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:47:47.369601   49279 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.test-preload-091301 san=[127.0.0.1 192.168.39.235 localhost minikube test-preload-091301]
	I1105 18:47:47.519104   49279 provision.go:177] copyRemoteCerts
	I1105 18:47:47.519158   49279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:47:47.519220   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:47.521601   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.521891   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.521919   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.522125   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:47.522280   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.522419   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:47.522526   49279 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/test-preload-091301/id_rsa Username:docker}
	I1105 18:47:47.604422   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:47:47.626412   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1105 18:47:47.648440   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:47:47.670268   49279 provision.go:87] duration metric: took 307.16154ms to configureAuth
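configureAuth (18:47:47.363 to .670) regenerates a server certificate whose SANs cover the loopback address, the VM IP, and the machine names, then copies it into /etc/docker over SSH. A compact Go sketch of issuing a certificate with that SAN list (self-signed here for brevity and only for illustration; minikube actually signs the server cert with its CA key):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-091301"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration value in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above: IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.235")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-091301"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}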
	I1105 18:47:47.670299   49279 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:47:47.670487   49279 config.go:182] Loaded profile config "test-preload-091301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1105 18:47:47.670568   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:47.673540   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.673866   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.673896   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.674023   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:47.674220   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.674376   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.674484   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:47.674631   49279 main.go:141] libmachine: Using SSH client type: native
	I1105 18:47:47.674779   49279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:47:47.674792   49279 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:47:47.904373   49279 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:47:47.904405   49279 machine.go:96] duration metric: took 897.488037ms to provisionDockerMachine
	I1105 18:47:47.904423   49279 start.go:293] postStartSetup for "test-preload-091301" (driver="kvm2")
	I1105 18:47:47.904439   49279 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:47:47.904487   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:47.904845   49279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:47:47.904873   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:47.907641   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.908006   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:47.908037   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:47.908218   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:47.908391   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:47.908523   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:47.908683   49279 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/test-preload-091301/id_rsa Username:docker}
	I1105 18:47:47.993125   49279 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:47:47.996822   49279 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:47:47.996841   49279 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:47:47.996901   49279 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:47:47.997001   49279 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:47:47.997114   49279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:47:48.006021   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:47:48.028193   49279 start.go:296] duration metric: took 123.755571ms for postStartSetup
	I1105 18:47:48.028231   49279 fix.go:56] duration metric: took 20.176274272s for fixHost
	I1105 18:47:48.028249   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:48.031254   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:48.031682   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:48.031717   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:48.031919   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:48.032115   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:48.032262   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:48.032401   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:48.032594   49279 main.go:141] libmachine: Using SSH client type: native
	I1105 18:47:48.032770   49279 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1105 18:47:48.032781   49279 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:47:48.139253   49279 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730832468.117181700
	
	I1105 18:47:48.139276   49279 fix.go:216] guest clock: 1730832468.117181700
	I1105 18:47:48.139284   49279 fix.go:229] Guest: 2024-11-05 18:47:48.1171817 +0000 UTC Remote: 2024-11-05 18:47:48.028234652 +0000 UTC m=+32.900871861 (delta=88.947048ms)
	I1105 18:47:48.139303   49279 fix.go:200] guest clock delta is within tolerance: 88.947048ms
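The 18:47:48.139 lines run date +%s.%N inside the VM, parse the output as the guest clock, and compare it with the host clock; only a delta outside some tolerance would trigger a clock adjustment. A small Go sketch of parsing that output and checking the delta (the 2s tolerance is an assumption for illustration, not the threshold minikube uses; the timestamps are the ones from the log):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1730832468.117181700")
// into a time.Time. Float parsing loses some nanosecond precision, which is
// fine for a tolerance check like this.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730832468.117181700")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1730832468, 28234652) // host-side timestamp from the log line
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold, illustration only
	fmt.Printf("delta=%v within tolerance=%v: %v\n",
		delta, tolerance, math.Abs(delta.Seconds()) < tolerance.Seconds())
}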
	I1105 18:47:48.139308   49279 start.go:83] releasing machines lock for "test-preload-091301", held for 20.287365463s
	I1105 18:47:48.139326   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:48.139587   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetIP
	I1105 18:47:48.142098   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:48.142399   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:48.142426   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:48.142561   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:48.143042   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:48.143209   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:47:48.143299   49279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:47:48.143330   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:48.143548   49279 ssh_runner.go:195] Run: cat /version.json
	I1105 18:47:48.143571   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:47:48.145796   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:48.146113   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:48.146138   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:48.146259   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:48.146289   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:48.146456   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:48.146593   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:48.146654   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:48.146680   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:48.146704   49279 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/test-preload-091301/id_rsa Username:docker}
	I1105 18:47:48.146817   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:47:48.146953   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:47:48.147132   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:47:48.147284   49279 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/test-preload-091301/id_rsa Username:docker}
	I1105 18:47:48.256371   49279 ssh_runner.go:195] Run: systemctl --version
	I1105 18:47:48.262018   49279 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:47:48.406488   49279 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:47:48.411794   49279 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:47:48.411900   49279 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:47:48.427745   49279 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:47:48.427768   49279 start.go:495] detecting cgroup driver to use...
	I1105 18:47:48.427827   49279 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:47:48.442925   49279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:47:48.456369   49279 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:47:48.456414   49279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:47:48.469496   49279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:47:48.482618   49279 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:47:48.593652   49279 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:47:48.761089   49279 docker.go:233] disabling docker service ...
	I1105 18:47:48.761152   49279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:47:48.782333   49279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:47:48.796136   49279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:47:48.913024   49279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:47:49.021856   49279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:47:49.035326   49279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:47:49.052511   49279 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1105 18:47:49.052565   49279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:47:49.062103   49279 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:47:49.062162   49279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:47:49.071598   49279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:47:49.080830   49279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:47:49.090312   49279 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:47:49.100041   49279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:47:49.109545   49279 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:47:49.125308   49279 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
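The block from 18:47:49.052 to .125 rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged port 0 via default_sysctls. The same edits expressed as Go string substitutions (a sketch over the config text only, standing in for the ssh_runner sed calls rather than reproducing them):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf applies substitutions equivalent to the sed commands above
// to the contents of the 02-crio.conf drop-in.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
	if !strings.Contains(conf, "default_sysctls") {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}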
	I1105 18:47:49.134647   49279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:47:49.143225   49279 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:47:49.143278   49279 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:47:49.155408   49279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:47:49.164182   49279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:47:49.275207   49279 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:47:49.359286   49279 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:47:49.359365   49279 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:47:49.363840   49279 start.go:563] Will wait 60s for crictl version
	I1105 18:47:49.363903   49279 ssh_runner.go:195] Run: which crictl
	I1105 18:47:49.367399   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:47:49.401666   49279 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:47:49.401746   49279 ssh_runner.go:195] Run: crio --version
	I1105 18:47:49.428333   49279 ssh_runner.go:195] Run: crio --version
	I1105 18:47:49.455364   49279 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1105 18:47:49.456533   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetIP
	I1105 18:47:49.459176   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:49.459534   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:47:49.459553   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:47:49.459815   49279 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:47:49.463696   49279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
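The bash pipeline above makes the host.minikube.internal entry idempotent: any stale line is filtered out before a fresh ip<TAB>host mapping is appended. A Go sketch of the same idea; the path, IP and hostname come from the log, the helper itself is assumed:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any stale line for host and appends a fresh
    // "ip<TAB>host" mapping, like the grep -v / echo pipeline in the log.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // remove the old mapping, if any
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }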
	I1105 18:47:49.475566   49279 kubeadm.go:883] updating cluster {Name:test-preload-091301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-091301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:47:49.475675   49279 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1105 18:47:49.475723   49279 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:47:49.508957   49279 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1105 18:47:49.509035   49279 ssh_runner.go:195] Run: which lz4
	I1105 18:47:49.512722   49279 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:47:49.516487   49279 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:47:49.516517   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1105 18:47:50.825200   49279 crio.go:462] duration metric: took 1.312505648s to copy over tarball
	I1105 18:47:50.825290   49279 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:47:53.157457   49279 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.332138735s)
	I1105 18:47:53.157491   49279 crio.go:469] duration metric: took 2.332252285s to extract the tarball
	I1105 18:47:53.157519   49279 ssh_runner.go:146] rm: /preloaded.tar.lz4
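The preloaded image tarball is copied to the node, unpacked into /var with lz4 decompression, and then deleted, as the preceding lines record. A Go sketch that shells out to the same tar invocation shown in the log; it assumes sudo, lz4 and the /preloaded.tar.lz4 path are available on the host:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Same invocation as the log: unpack the lz4-compressed preload into /var,
        // preserving the security.capability xattrs on the extracted image layers.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
        // The tarball is removed once extracted, as in the "rm: /preloaded.tar.lz4" line.
        _ = exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run()
    }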
	I1105 18:47:53.197539   49279 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:47:53.241230   49279 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1105 18:47:53.241257   49279 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 18:47:53.241326   49279 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:47:53.241343   49279 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1105 18:47:53.241341   49279 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1105 18:47:53.241380   49279 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1105 18:47:53.241391   49279 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1105 18:47:53.241391   49279 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1105 18:47:53.241471   49279 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1105 18:47:53.241415   49279 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1105 18:47:53.242640   49279 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1105 18:47:53.242639   49279 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1105 18:47:53.242669   49279 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1105 18:47:53.242676   49279 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1105 18:47:53.242641   49279 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1105 18:47:53.242683   49279 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1105 18:47:53.242707   49279 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1105 18:47:53.242650   49279 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:47:53.456457   49279 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1105 18:47:53.480895   49279 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1105 18:47:53.483489   49279 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1105 18:47:53.488663   49279 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1105 18:47:53.496750   49279 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1105 18:47:53.501092   49279 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1105 18:47:53.513464   49279 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1105 18:47:53.518880   49279 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1105 18:47:53.518927   49279 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1105 18:47:53.518988   49279 ssh_runner.go:195] Run: which crictl
	I1105 18:47:53.592224   49279 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1105 18:47:53.592270   49279 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1105 18:47:53.592307   49279 ssh_runner.go:195] Run: which crictl
	I1105 18:47:53.628226   49279 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1105 18:47:53.628267   49279 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1105 18:47:53.628295   49279 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1105 18:47:53.628312   49279 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1105 18:47:53.628340   49279 ssh_runner.go:195] Run: which crictl
	I1105 18:47:53.628338   49279 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1105 18:47:53.628414   49279 ssh_runner.go:195] Run: which crictl
	I1105 18:47:53.628271   49279 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1105 18:47:53.628465   49279 ssh_runner.go:195] Run: which crictl
	I1105 18:47:53.628391   49279 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1105 18:47:53.628496   49279 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1105 18:47:53.628536   49279 ssh_runner.go:195] Run: which crictl
	I1105 18:47:53.649932   49279 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1105 18:47:53.649989   49279 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1105 18:47:53.650007   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1105 18:47:53.650032   49279 ssh_runner.go:195] Run: which crictl
	I1105 18:47:53.650069   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1105 18:47:53.650117   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1105 18:47:53.650151   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1105 18:47:53.650185   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1105 18:47:53.650223   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1105 18:47:53.765200   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1105 18:47:53.765266   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1105 18:47:53.765308   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1105 18:47:53.765366   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1105 18:47:53.765431   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1105 18:47:53.765497   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1105 18:47:53.765517   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1105 18:47:53.922279   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1105 18:47:53.922323   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1105 18:47:53.922355   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1105 18:47:53.922426   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1105 18:47:53.922469   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1105 18:47:53.922540   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1105 18:47:53.922590   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1105 18:47:54.072066   49279 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1105 18:47:54.072092   49279 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1105 18:47:54.072127   49279 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1105 18:47:54.072184   49279 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1105 18:47:54.072196   49279 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1105 18:47:54.072203   49279 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1105 18:47:54.072241   49279 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1105 18:47:54.072251   49279 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1105 18:47:54.072312   49279 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1105 18:47:54.072331   49279 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1105 18:47:54.072352   49279 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1105 18:47:54.072388   49279 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1105 18:47:54.072415   49279 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1105 18:47:54.087220   49279 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1105 18:47:54.087245   49279 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1105 18:47:54.087298   49279 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1105 18:47:54.087322   49279 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1105 18:47:54.113272   49279 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1105 18:47:54.113304   49279 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1105 18:47:54.113310   49279 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1105 18:47:54.113333   49279 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1105 18:47:54.113356   49279 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1105 18:47:54.113423   49279 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1105 18:47:54.519274   49279 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:47:57.251264   49279 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.137813868s)
	I1105 18:47:57.251300   49279 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1105 18:47:57.251333   49279 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.732030231s)
	I1105 18:47:57.251345   49279 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.16402343s)
	I1105 18:47:57.251369   49279 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1105 18:47:57.251403   49279 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1105 18:47:57.251455   49279 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1105 18:47:59.297765   49279 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.046288176s)
	I1105 18:47:59.297797   49279 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1105 18:47:59.297840   49279 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1105 18:47:59.297889   49279 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1105 18:48:00.134551   49279 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1105 18:48:00.134602   49279 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1105 18:48:00.134659   49279 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1105 18:48:00.577055   49279 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1105 18:48:00.577106   49279 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1105 18:48:00.577161   49279 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1105 18:48:00.723112   49279 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1105 18:48:00.723163   49279 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1105 18:48:00.723214   49279 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1105 18:48:01.060880   49279 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1105 18:48:01.060928   49279 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1105 18:48:01.060981   49279 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1105 18:48:01.703578   49279 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1105 18:48:01.703625   49279 cache_images.go:123] Successfully loaded all cached images
	I1105 18:48:01.703632   49279 cache_images.go:92] duration metric: took 8.462362709s to LoadCachedImages
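Each cached image archive is staged under /var/lib/minikube/images (the copy is skipped when an identical file already exists) and then imported with podman load, as the preceding lines show. A Go sketch of the load step; the archive names are taken from the log, the helper itself is hypothetical:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
    )

    // loadCachedImage mirrors the "sudo podman load -i <archive>" calls above.
    func loadCachedImage(archive string) error {
        if _, err := os.Stat(archive); err != nil {
            return err // nothing staged at this path
        }
        cmd := exec.Command("sudo", "podman", "load", "-i", archive)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        dir := "/var/lib/minikube/images"
        for _, name := range []string{"kube-apiserver_v1.24.4", "etcd_3.5.3-0", "pause_3.7"} {
            if err := loadCachedImage(filepath.Join(dir, name)); err != nil {
                log.Fatal(err)
            }
        }
    }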
	I1105 18:48:01.703646   49279 kubeadm.go:934] updating node { 192.168.39.235 8443 v1.24.4 crio true true} ...
	I1105 18:48:01.703744   49279 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-091301 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-091301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:48:01.703809   49279 ssh_runner.go:195] Run: crio config
	I1105 18:48:01.745807   49279 cni.go:84] Creating CNI manager for ""
	I1105 18:48:01.745835   49279 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:48:01.745848   49279 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:48:01.745873   49279 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-091301 NodeName:test-preload-091301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:48:01.746012   49279 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-091301"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 18:48:01.746075   49279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1105 18:48:01.755242   49279 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:48:01.755312   49279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 18:48:01.763942   49279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1105 18:48:01.779210   49279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:48:01.794031   49279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1105 18:48:01.809634   49279 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I1105 18:48:01.813165   49279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:48:01.824627   49279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:48:01.941254   49279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:48:01.957244   49279 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301 for IP: 192.168.39.235
	I1105 18:48:01.957268   49279 certs.go:194] generating shared ca certs ...
	I1105 18:48:01.957287   49279 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:48:01.957443   49279 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:48:01.957484   49279 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:48:01.957495   49279 certs.go:256] generating profile certs ...
	I1105 18:48:01.957570   49279 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/client.key
	I1105 18:48:01.957633   49279 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/apiserver.key.261d4b2c
	I1105 18:48:01.957671   49279 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/proxy-client.key
	I1105 18:48:01.957783   49279 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:48:01.957809   49279 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:48:01.957820   49279 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:48:01.957849   49279 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:48:01.957882   49279 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:48:01.957916   49279 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:48:01.957968   49279 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:48:01.958685   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:48:02.002451   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:48:02.046162   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:48:02.080504   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:48:02.115485   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1105 18:48:02.146076   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:48:02.180809   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:48:02.204233   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:48:02.226287   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:48:02.247905   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:48:02.269333   49279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:48:02.290985   49279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:48:02.306841   49279 ssh_runner.go:195] Run: openssl version
	I1105 18:48:02.312212   49279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:48:02.322471   49279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:48:02.326519   49279 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:48:02.326566   49279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:48:02.331790   49279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:48:02.342261   49279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:48:02.352099   49279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:48:02.356067   49279 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:48:02.356116   49279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:48:02.361324   49279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:48:02.371693   49279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:48:02.382260   49279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:48:02.386533   49279 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:48:02.386587   49279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:48:02.392210   49279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:48:02.402717   49279 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:48:02.407042   49279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:48:02.412700   49279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:48:02.418269   49279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:48:02.423901   49279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:48:02.429289   49279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:48:02.434571   49279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
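The openssl x509 -checkend 86400 calls above verify that each control-plane certificate stays valid for at least another 24 hours. The same condition can be tested with Go's crypto/x509; this sketch assumes a readable PEM file at the path shown and is not how minikube itself performs the check:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the condition "openssl x509 -checkend 86400" tests for d = 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }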
	I1105 18:48:02.439862   49279 kubeadm.go:392] StartCluster: {Name:test-preload-091301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-091301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:48:02.439953   49279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:48:02.439994   49279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:48:02.474781   49279 cri.go:89] found id: ""
	I1105 18:48:02.474845   49279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:48:02.484744   49279 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 18:48:02.484766   49279 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 18:48:02.484807   49279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 18:48:02.493965   49279 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 18:48:02.494369   49279 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-091301" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:48:02.494505   49279 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-091301" cluster setting kubeconfig missing "test-preload-091301" context setting]
	I1105 18:48:02.494810   49279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:48:02.495433   49279 kapi.go:59] client config for test-preload-091301: &rest.Config{Host:"https://192.168.39.235:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 18:48:02.496092   49279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 18:48:02.505174   49279 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.235
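The kapi.go client config logged just above is a client-go rest.Config built from the profile's client certificate, key and cluster CA. A minimal sketch of constructing such a client, assuming the k8s.io/client-go module is on the module path; the host and file paths are copied from the log:

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301"
        cfg := &rest.Config{
            Host: "https://192.168.39.235:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/client.crt",
                KeyFile:  profile + "/client.key",
                CAFile:   "/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        _ = clientset // e.g. clientset.CoreV1().Pods("kube-system").List(...)
    }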
	I1105 18:48:02.505206   49279 kubeadm.go:1160] stopping kube-system containers ...
	I1105 18:48:02.505217   49279 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 18:48:02.505280   49279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:48:02.539098   49279 cri.go:89] found id: ""
	I1105 18:48:02.539173   49279 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 18:48:02.554552   49279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:48:02.564465   49279 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:48:02.564496   49279 kubeadm.go:157] found existing configuration files:
	
	I1105 18:48:02.564548   49279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:48:02.573611   49279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:48:02.573662   49279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:48:02.583160   49279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:48:02.592013   49279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:48:02.592075   49279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:48:02.601201   49279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:48:02.609781   49279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:48:02.609839   49279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:48:02.618771   49279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:48:02.627236   49279 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:48:02.627295   49279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:48:02.636164   49279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:48:02.645294   49279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 18:48:02.743052   49279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 18:48:03.427660   49279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 18:48:03.673772   49279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 18:48:03.738589   49279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 18:48:03.802861   49279 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:48:03.802997   49279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:48:04.303181   49279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:48:04.803045   49279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:48:04.817435   49279 api_server.go:72] duration metric: took 1.014577601s to wait for apiserver process to appear ...
	I1105 18:48:04.817472   49279 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:48:04.817498   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:04.817913   49279 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": dial tcp 192.168.39.235:8443: connect: connection refused
	I1105 18:48:05.317789   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:10.318116   49279 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1105 18:48:10.318189   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:15.319325   49279 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1105 18:48:15.319368   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:20.319563   49279 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1105 18:48:20.319604   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:25.320671   49279 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1105 18:48:25.320723   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:25.701711   49279 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": read tcp 192.168.39.1:34172->192.168.39.235:8443: read: connection reset by peer
	I1105 18:48:25.817913   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:25.818493   49279 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": dial tcp 192.168.39.235:8443: connect: connection refused
	I1105 18:48:26.318046   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:28.916784   49279 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 18:48:28.916812   49279 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 18:48:28.916846   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:29.042341   49279 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:48:29.042380   49279 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:48:29.317691   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:29.323439   49279 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:48:29.323465   49279 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:48:29.818043   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:29.828631   49279 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:48:29.828664   49279 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:48:30.318303   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:30.330331   49279 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I1105 18:48:30.337305   49279 api_server.go:141] control plane version: v1.24.4
	I1105 18:48:30.337332   49279 api_server.go:131] duration metric: took 25.519852791s to wait for apiserver health ...
	I1105 18:48:30.337340   49279 cni.go:84] Creating CNI manager for ""
	I1105 18:48:30.337346   49279 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:48:30.339069   49279 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 18:48:30.340371   49279 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 18:48:30.350573   49279 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 18:48:30.367611   49279 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:48:30.367679   49279 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 18:48:30.367691   49279 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 18:48:30.386232   49279 system_pods.go:59] 8 kube-system pods found
	I1105 18:48:30.386262   49279 system_pods.go:61] "coredns-6d4b75cb6d-w5j97" [809c66c9-196c-49be-a09e-33ca9d290d1e] Running
	I1105 18:48:30.386267   49279 system_pods.go:61] "coredns-6d4b75cb6d-xc4qx" [934397a8-9a26-4e61-a47a-57260dc98dfb] Running
	I1105 18:48:30.386277   49279 system_pods.go:61] "etcd-test-preload-091301" [a37d4dc2-7dd8-4de5-adf3-99bf4926427c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 18:48:30.386283   49279 system_pods.go:61] "kube-apiserver-test-preload-091301" [c5c16c74-f902-430e-b62d-961a35714a40] Running
	I1105 18:48:30.386290   49279 system_pods.go:61] "kube-controller-manager-test-preload-091301" [44e44209-89da-4536-be73-76a34a8f19bc] Running
	I1105 18:48:30.386295   49279 system_pods.go:61] "kube-proxy-b9q6b" [6fa32791-7302-4da1-ad43-7fb1fb8ed3ba] Running
	I1105 18:48:30.386300   49279 system_pods.go:61] "kube-scheduler-test-preload-091301" [a9e93f1b-de3d-48e7-a917-2c8681ac5a85] Running
	I1105 18:48:30.386307   49279 system_pods.go:61] "storage-provisioner" [98259676-33af-4e48-9399-599c536a088e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 18:48:30.386313   49279 system_pods.go:74] duration metric: took 18.683244ms to wait for pod list to return data ...
	I1105 18:48:30.386321   49279 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:48:30.390928   49279 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:48:30.390960   49279 node_conditions.go:123] node cpu capacity is 2
	I1105 18:48:30.390990   49279 node_conditions.go:105] duration metric: took 4.663594ms to run NodePressure ...
	I1105 18:48:30.391010   49279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 18:48:30.712667   49279 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 18:48:30.722185   49279 kubeadm.go:739] kubelet initialised
	I1105 18:48:30.722209   49279 kubeadm.go:740] duration metric: took 9.513134ms waiting for restarted kubelet to initialise ...
	I1105 18:48:30.722217   49279 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:48:30.740507   49279 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-w5j97" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:30.749377   49279 pod_ready.go:98] node "test-preload-091301" hosting pod "coredns-6d4b75cb6d-w5j97" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:30.749406   49279 pod_ready.go:82] duration metric: took 8.873513ms for pod "coredns-6d4b75cb6d-w5j97" in "kube-system" namespace to be "Ready" ...
	E1105 18:48:30.749417   49279 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-091301" hosting pod "coredns-6d4b75cb6d-w5j97" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:30.749426   49279 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-xc4qx" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:30.758934   49279 pod_ready.go:98] node "test-preload-091301" hosting pod "coredns-6d4b75cb6d-xc4qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:30.758965   49279 pod_ready.go:82] duration metric: took 9.526786ms for pod "coredns-6d4b75cb6d-xc4qx" in "kube-system" namespace to be "Ready" ...
	E1105 18:48:30.758991   49279 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-091301" hosting pod "coredns-6d4b75cb6d-xc4qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:30.759000   49279 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:30.765863   49279 pod_ready.go:98] node "test-preload-091301" hosting pod "etcd-test-preload-091301" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:30.765884   49279 pod_ready.go:82] duration metric: took 6.873911ms for pod "etcd-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	E1105 18:48:30.765892   49279 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-091301" hosting pod "etcd-test-preload-091301" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:30.765901   49279 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:30.773950   49279 pod_ready.go:98] node "test-preload-091301" hosting pod "kube-apiserver-test-preload-091301" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:30.773978   49279 pod_ready.go:82] duration metric: took 8.062452ms for pod "kube-apiserver-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	E1105 18:48:30.773989   49279 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-091301" hosting pod "kube-apiserver-test-preload-091301" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:30.773997   49279 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:31.171190   49279 pod_ready.go:98] node "test-preload-091301" hosting pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:31.171219   49279 pod_ready.go:82] duration metric: took 397.210799ms for pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	E1105 18:48:31.171229   49279 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-091301" hosting pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:31.171235   49279 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b9q6b" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:31.572590   49279 pod_ready.go:98] node "test-preload-091301" hosting pod "kube-proxy-b9q6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:31.572633   49279 pod_ready.go:82] duration metric: took 401.376655ms for pod "kube-proxy-b9q6b" in "kube-system" namespace to be "Ready" ...
	E1105 18:48:31.572646   49279 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-091301" hosting pod "kube-proxy-b9q6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:31.572653   49279 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:31.972543   49279 pod_ready.go:98] node "test-preload-091301" hosting pod "kube-scheduler-test-preload-091301" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:31.972573   49279 pod_ready.go:82] duration metric: took 399.91255ms for pod "kube-scheduler-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	E1105 18:48:31.972586   49279 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-091301" hosting pod "kube-scheduler-test-preload-091301" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:31.972595   49279 pod_ready.go:39] duration metric: took 1.250368207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:48:31.972615   49279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 18:48:31.984205   49279 ops.go:34] apiserver oom_adj: -16
	I1105 18:48:31.984231   49279 kubeadm.go:597] duration metric: took 29.499459921s to restartPrimaryControlPlane
	I1105 18:48:31.984240   49279 kubeadm.go:394] duration metric: took 29.544385297s to StartCluster
	I1105 18:48:31.984255   49279 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:48:31.984316   49279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:48:31.984941   49279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:48:31.985162   49279 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:48:31.985244   49279 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 18:48:31.985364   49279 addons.go:69] Setting storage-provisioner=true in profile "test-preload-091301"
	I1105 18:48:31.985386   49279 addons.go:234] Setting addon storage-provisioner=true in "test-preload-091301"
	W1105 18:48:31.985395   49279 addons.go:243] addon storage-provisioner should already be in state true
	I1105 18:48:31.985392   49279 addons.go:69] Setting default-storageclass=true in profile "test-preload-091301"
	I1105 18:48:31.985419   49279 host.go:66] Checking if "test-preload-091301" exists ...
	I1105 18:48:31.985422   49279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-091301"
	I1105 18:48:31.985395   49279 config.go:182] Loaded profile config "test-preload-091301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1105 18:48:31.985707   49279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:48:31.985753   49279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:48:31.985831   49279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:48:31.985877   49279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:48:31.986993   49279 out.go:177] * Verifying Kubernetes components...
	I1105 18:48:31.988333   49279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:48:32.000995   49279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44025
	I1105 18:48:32.001008   49279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I1105 18:48:32.001459   49279 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:48:32.001460   49279 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:48:32.001991   49279 main.go:141] libmachine: Using API Version  1
	I1105 18:48:32.002008   49279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:48:32.002135   49279 main.go:141] libmachine: Using API Version  1
	I1105 18:48:32.002159   49279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:48:32.002337   49279 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:48:32.002483   49279 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:48:32.002626   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetState
	I1105 18:48:32.002839   49279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:48:32.002893   49279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:48:32.005023   49279 kapi.go:59] client config for test-preload-091301: &rest.Config{Host:"https://192.168.39.235:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/profiles/test-preload-091301/client.key", CAFile:"/home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2437da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 18:48:32.005358   49279 addons.go:234] Setting addon default-storageclass=true in "test-preload-091301"
	W1105 18:48:32.005379   49279 addons.go:243] addon default-storageclass should already be in state true
	I1105 18:48:32.005405   49279 host.go:66] Checking if "test-preload-091301" exists ...
	I1105 18:48:32.005790   49279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:48:32.005833   49279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:48:32.020650   49279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I1105 18:48:32.021101   49279 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:48:32.021739   49279 main.go:141] libmachine: Using API Version  1
	I1105 18:48:32.021764   49279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:48:32.022161   49279 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:48:32.022391   49279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I1105 18:48:32.022745   49279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:48:32.022787   49279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:48:32.022829   49279 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:48:32.023225   49279 main.go:141] libmachine: Using API Version  1
	I1105 18:48:32.023240   49279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:48:32.023577   49279 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:48:32.023787   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetState
	I1105 18:48:32.025692   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:48:32.027536   49279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:48:32.028884   49279 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:48:32.028907   49279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 18:48:32.028932   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:48:32.032226   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:48:32.032671   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:48:32.032697   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:48:32.032878   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:48:32.033040   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:48:32.033189   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:48:32.033335   49279 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/test-preload-091301/id_rsa Username:docker}
	I1105 18:48:32.065818   49279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36723
	I1105 18:48:32.066256   49279 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:48:32.066780   49279 main.go:141] libmachine: Using API Version  1
	I1105 18:48:32.066799   49279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:48:32.067143   49279 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:48:32.067360   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetState
	I1105 18:48:32.068978   49279 main.go:141] libmachine: (test-preload-091301) Calling .DriverName
	I1105 18:48:32.069202   49279 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 18:48:32.069217   49279 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 18:48:32.069232   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHHostname
	I1105 18:48:32.071949   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:48:32.072420   49279 main.go:141] libmachine: (test-preload-091301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:12:bd", ip: ""} in network mk-test-preload-091301: {Iface:virbr1 ExpiryTime:2024-11-05 19:47:38 +0000 UTC Type:0 Mac:52:54:00:a8:12:bd Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:test-preload-091301 Clientid:01:52:54:00:a8:12:bd}
	I1105 18:48:32.072447   49279 main.go:141] libmachine: (test-preload-091301) DBG | domain test-preload-091301 has defined IP address 192.168.39.235 and MAC address 52:54:00:a8:12:bd in network mk-test-preload-091301
	I1105 18:48:32.072611   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHPort
	I1105 18:48:32.072777   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHKeyPath
	I1105 18:48:32.072917   49279 main.go:141] libmachine: (test-preload-091301) Calling .GetSSHUsername
	I1105 18:48:32.073057   49279 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/test-preload-091301/id_rsa Username:docker}
	I1105 18:48:32.158506   49279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:48:32.174893   49279 node_ready.go:35] waiting up to 6m0s for node "test-preload-091301" to be "Ready" ...
	I1105 18:48:32.264485   49279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:48:32.281997   49279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 18:48:33.187336   49279 main.go:141] libmachine: Making call to close driver server
	I1105 18:48:33.187364   49279 main.go:141] libmachine: (test-preload-091301) Calling .Close
	I1105 18:48:33.187339   49279 main.go:141] libmachine: Making call to close driver server
	I1105 18:48:33.187428   49279 main.go:141] libmachine: (test-preload-091301) Calling .Close
	I1105 18:48:33.187635   49279 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:48:33.187669   49279 main.go:141] libmachine: (test-preload-091301) DBG | Closing plugin on server side
	I1105 18:48:33.187682   49279 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:48:33.187681   49279 main.go:141] libmachine: (test-preload-091301) DBG | Closing plugin on server side
	I1105 18:48:33.187692   49279 main.go:141] libmachine: Making call to close driver server
	I1105 18:48:33.187700   49279 main.go:141] libmachine: (test-preload-091301) Calling .Close
	I1105 18:48:33.187654   49279 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:48:33.187745   49279 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:48:33.187752   49279 main.go:141] libmachine: Making call to close driver server
	I1105 18:48:33.187759   49279 main.go:141] libmachine: (test-preload-091301) Calling .Close
	I1105 18:48:33.187877   49279 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:48:33.187901   49279 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:48:33.187961   49279 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:48:33.187975   49279 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:48:33.188011   49279 main.go:141] libmachine: (test-preload-091301) DBG | Closing plugin on server side
	I1105 18:48:33.196121   49279 main.go:141] libmachine: Making call to close driver server
	I1105 18:48:33.196139   49279 main.go:141] libmachine: (test-preload-091301) Calling .Close
	I1105 18:48:33.196405   49279 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:48:33.196421   49279 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:48:33.196436   49279 main.go:141] libmachine: (test-preload-091301) DBG | Closing plugin on server side
	I1105 18:48:33.198274   49279 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1105 18:48:33.199372   49279 addons.go:510] duration metric: took 1.214138659s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1105 18:48:34.178945   49279 node_ready.go:53] node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:36.678729   49279 node_ready.go:53] node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:39.179069   49279 node_ready.go:53] node "test-preload-091301" has status "Ready":"False"
	I1105 18:48:39.679151   49279 node_ready.go:49] node "test-preload-091301" has status "Ready":"True"
	I1105 18:48:39.679174   49279 node_ready.go:38] duration metric: took 7.504248194s for node "test-preload-091301" to be "Ready" ...
	I1105 18:48:39.679182   49279 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:48:39.684434   49279 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-w5j97" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:39.689869   49279 pod_ready.go:93] pod "coredns-6d4b75cb6d-w5j97" in "kube-system" namespace has status "Ready":"True"
	I1105 18:48:39.689891   49279 pod_ready.go:82] duration metric: took 5.426689ms for pod "coredns-6d4b75cb6d-w5j97" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:39.689902   49279 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:39.694700   49279 pod_ready.go:93] pod "etcd-test-preload-091301" in "kube-system" namespace has status "Ready":"True"
	I1105 18:48:39.694719   49279 pod_ready.go:82] duration metric: took 4.809551ms for pod "etcd-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:39.694728   49279 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:39.699647   49279 pod_ready.go:93] pod "kube-apiserver-test-preload-091301" in "kube-system" namespace has status "Ready":"True"
	I1105 18:48:39.699665   49279 pod_ready.go:82] duration metric: took 4.929593ms for pod "kube-apiserver-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:39.699675   49279 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:41.708964   49279 pod_ready.go:103] pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace has status "Ready":"False"
	I1105 18:48:44.205815   49279 pod_ready.go:103] pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace has status "Ready":"False"
	I1105 18:48:46.706068   49279 pod_ready.go:103] pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace has status "Ready":"False"
	I1105 18:48:47.706958   49279 pod_ready.go:93] pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace has status "Ready":"True"
	I1105 18:48:47.706996   49279 pod_ready.go:82] duration metric: took 8.00731371s for pod "kube-controller-manager-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:47.707007   49279 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b9q6b" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:47.712216   49279 pod_ready.go:93] pod "kube-proxy-b9q6b" in "kube-system" namespace has status "Ready":"True"
	I1105 18:48:47.712237   49279 pod_ready.go:82] duration metric: took 5.223609ms for pod "kube-proxy-b9q6b" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:47.712245   49279 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:47.716322   49279 pod_ready.go:93] pod "kube-scheduler-test-preload-091301" in "kube-system" namespace has status "Ready":"True"
	I1105 18:48:47.716339   49279 pod_ready.go:82] duration metric: took 4.088876ms for pod "kube-scheduler-test-preload-091301" in "kube-system" namespace to be "Ready" ...
	I1105 18:48:47.716348   49279 pod_ready.go:39] duration metric: took 8.037156495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:48:47.716360   49279 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:48:47.716421   49279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:48:47.730742   49279 api_server.go:72] duration metric: took 15.745553387s to wait for apiserver process to appear ...
	I1105 18:48:47.730765   49279 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:48:47.730779   49279 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:48:47.735614   49279 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I1105 18:48:47.736550   49279 api_server.go:141] control plane version: v1.24.4
	I1105 18:48:47.736570   49279 api_server.go:131] duration metric: took 5.799907ms to wait for apiserver health ...
	I1105 18:48:47.736577   49279 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:48:47.741814   49279 system_pods.go:59] 7 kube-system pods found
	I1105 18:48:47.741838   49279 system_pods.go:61] "coredns-6d4b75cb6d-w5j97" [809c66c9-196c-49be-a09e-33ca9d290d1e] Running
	I1105 18:48:47.741844   49279 system_pods.go:61] "etcd-test-preload-091301" [a37d4dc2-7dd8-4de5-adf3-99bf4926427c] Running
	I1105 18:48:47.741849   49279 system_pods.go:61] "kube-apiserver-test-preload-091301" [c5c16c74-f902-430e-b62d-961a35714a40] Running
	I1105 18:48:47.741854   49279 system_pods.go:61] "kube-controller-manager-test-preload-091301" [44e44209-89da-4536-be73-76a34a8f19bc] Running
	I1105 18:48:47.741858   49279 system_pods.go:61] "kube-proxy-b9q6b" [6fa32791-7302-4da1-ad43-7fb1fb8ed3ba] Running
	I1105 18:48:47.741867   49279 system_pods.go:61] "kube-scheduler-test-preload-091301" [a9e93f1b-de3d-48e7-a917-2c8681ac5a85] Running
	I1105 18:48:47.741873   49279 system_pods.go:61] "storage-provisioner" [98259676-33af-4e48-9399-599c536a088e] Running
	I1105 18:48:47.741880   49279 system_pods.go:74] duration metric: took 5.296911ms to wait for pod list to return data ...
	I1105 18:48:47.741890   49279 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:48:47.744126   49279 default_sa.go:45] found service account: "default"
	I1105 18:48:47.744143   49279 default_sa.go:55] duration metric: took 2.247143ms for default service account to be created ...
	I1105 18:48:47.744150   49279 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:48:47.749045   49279 system_pods.go:86] 7 kube-system pods found
	I1105 18:48:47.749066   49279 system_pods.go:89] "coredns-6d4b75cb6d-w5j97" [809c66c9-196c-49be-a09e-33ca9d290d1e] Running
	I1105 18:48:47.749073   49279 system_pods.go:89] "etcd-test-preload-091301" [a37d4dc2-7dd8-4de5-adf3-99bf4926427c] Running
	I1105 18:48:47.749077   49279 system_pods.go:89] "kube-apiserver-test-preload-091301" [c5c16c74-f902-430e-b62d-961a35714a40] Running
	I1105 18:48:47.749080   49279 system_pods.go:89] "kube-controller-manager-test-preload-091301" [44e44209-89da-4536-be73-76a34a8f19bc] Running
	I1105 18:48:47.749084   49279 system_pods.go:89] "kube-proxy-b9q6b" [6fa32791-7302-4da1-ad43-7fb1fb8ed3ba] Running
	I1105 18:48:47.749087   49279 system_pods.go:89] "kube-scheduler-test-preload-091301" [a9e93f1b-de3d-48e7-a917-2c8681ac5a85] Running
	I1105 18:48:47.749090   49279 system_pods.go:89] "storage-provisioner" [98259676-33af-4e48-9399-599c536a088e] Running
	I1105 18:48:47.749095   49279 system_pods.go:126] duration metric: took 4.941255ms to wait for k8s-apps to be running ...
	I1105 18:48:47.749101   49279 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:48:47.749145   49279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:48:47.763389   49279 system_svc.go:56] duration metric: took 14.279613ms WaitForService to wait for kubelet
	I1105 18:48:47.763417   49279 kubeadm.go:582] duration metric: took 15.778225546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:48:47.763432   49279 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:48:47.766872   49279 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:48:47.766894   49279 node_conditions.go:123] node cpu capacity is 2
	I1105 18:48:47.766903   49279 node_conditions.go:105] duration metric: took 3.466746ms to run NodePressure ...
	I1105 18:48:47.766913   49279 start.go:241] waiting for startup goroutines ...
	I1105 18:48:47.766920   49279 start.go:246] waiting for cluster config update ...
	I1105 18:48:47.766929   49279 start.go:255] writing updated cluster config ...
	I1105 18:48:47.767212   49279 ssh_runner.go:195] Run: rm -f paused
	I1105 18:48:47.812781   49279 start.go:600] kubectl: 1.31.2, cluster: 1.24.4 (minor skew: 7)
	I1105 18:48:47.814463   49279 out.go:201] 
	W1105 18:48:47.815560   49279 out.go:270] ! /usr/local/bin/kubectl is version 1.31.2, which may have incompatibilities with Kubernetes 1.24.4.
	I1105 18:48:47.816596   49279 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1105 18:48:47.817864   49279 out.go:177] * Done! kubectl is now configured to use "test-preload-091301" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.664345054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730832528664321636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17013578-8a06-44ac-af0c-0827c5cf2649 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.664912338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17572a2f-6c40-467f-80e8-834927674036 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.664983500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17572a2f-6c40-467f-80e8-834927674036 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.665194526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413dd0cb71e6bbb4b3cc7971c447d01cd8f10326446eeb89c40680bd8a3a02fd,PodSandboxId:3632502fc4b6b81fe8edc5dc49edc4b2fa1d74354e6091e7c39dd1aabbd7b53c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730832517708953212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w5j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 809c66c9-196c-49be-a09e-33ca9d290d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 829736c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f8546e8b2d19264c1af64095de1d3cc6b62efb77a13f6c1f4a0545cce954d9,PodSandboxId:2587711d28a173fe30b900eaa6de9907c479597856387be96dc48d42232dd93f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730832510845198040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9q6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fa32791-7302-4da1-ad43-7fb1fb8ed3ba,},Annotations:map[string]string{io.kubernetes.container.hash: bb5e0646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686f182f9afd5018c9ddf68893444aece100a412134e616eccd86af9b61d1754,PodSandboxId:b9e1c90ee009c668814670889f90d39288cdd40be4178c6d4fbbb042a95b7f9b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730832510572740626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
259676-33af-4e48-9399-599c536a088e,},Annotations:map[string]string{io.kubernetes.container.hash: 531fe77d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d447d66e128e3d30854238c428bd18fa4b727405099d71eeee2223e33bac604,PodSandboxId:ce77eb193413a5bbe0d8f0471e58a9a433187a7e3ff433a5070edcae48b09a7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730832510005084765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 3bc55ff89c1ea641803367ce57564d31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c1c4955c0e2247641817d6f3c6a8fc94d91a53c3fdb8bfffcc19964331d528,PodSandboxId:b6fb3ec3c4a58fc2122c55d477289202fb44240d8556063f22df3207d39d2cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730832505983373696,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b7a91ee8fbce45582dd0907f6d65fecc,},Annotations:map[string]string{io.kubernetes.container.hash: af3d6b47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e085b96eff01f2619a1d1ba4adbd6155a2c6555a5733355dc1e57887da761613,PodSandboxId:4d94410f5e348f1d54fdd26718c86d449651c103f0fe1d201d9b3eb5542b03f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730832504181443753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11ed3e0aec3f12126d4da79c61ef9b9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 6b2555a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4986bb81829b64ec38f9e2ffffda9553fcc1141b3020eeed2ba702407d33e9,PodSandboxId:c8d9bce8150ad28465b57c18d1a62504c706be0814f82233e5ff82877c4eb269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730832484512160883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487d143db49caa7d5a43cbcae040b0e,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06183b252e345f8de25269c8751b19ec95026df4ef5c4abf351c3b70e3caa39,PodSandboxId:ce77eb193413a5bbe0d8f0471e58a9a433187a7e3ff433a5070edcae48b09a7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1730832484468519333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc55ff89c1ea641803367ce57564d
31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88fde2665b54895bf3f7864412315eae3f666cf68f6021706a74da70c620f687,PodSandboxId:b6fb3ec3c4a58fc2122c55d477289202fb44240d8556063f22df3207d39d2cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1730832484446343408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7a91ee8fbce45582dd0907f6d65fecc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: af3d6b47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17572a2f-6c40-467f-80e8-834927674036 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.705374338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8c93142-f3d1-4994-9110-be74604c3bb1 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.705447803Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8c93142-f3d1-4994-9110-be74604c3bb1 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.706550319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f354398-2a7f-44e3-ba7d-48043a6cbaa0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.707093810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730832528707071153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f354398-2a7f-44e3-ba7d-48043a6cbaa0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.707655794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03a061ef-1fa5-48c7-9b3f-651ae9c7374c name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.707741350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03a061ef-1fa5-48c7-9b3f-651ae9c7374c name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.707938203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413dd0cb71e6bbb4b3cc7971c447d01cd8f10326446eeb89c40680bd8a3a02fd,PodSandboxId:3632502fc4b6b81fe8edc5dc49edc4b2fa1d74354e6091e7c39dd1aabbd7b53c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730832517708953212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w5j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 809c66c9-196c-49be-a09e-33ca9d290d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 829736c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f8546e8b2d19264c1af64095de1d3cc6b62efb77a13f6c1f4a0545cce954d9,PodSandboxId:2587711d28a173fe30b900eaa6de9907c479597856387be96dc48d42232dd93f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730832510845198040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9q6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fa32791-7302-4da1-ad43-7fb1fb8ed3ba,},Annotations:map[string]string{io.kubernetes.container.hash: bb5e0646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686f182f9afd5018c9ddf68893444aece100a412134e616eccd86af9b61d1754,PodSandboxId:b9e1c90ee009c668814670889f90d39288cdd40be4178c6d4fbbb042a95b7f9b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730832510572740626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
259676-33af-4e48-9399-599c536a088e,},Annotations:map[string]string{io.kubernetes.container.hash: 531fe77d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d447d66e128e3d30854238c428bd18fa4b727405099d71eeee2223e33bac604,PodSandboxId:ce77eb193413a5bbe0d8f0471e58a9a433187a7e3ff433a5070edcae48b09a7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730832510005084765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 3bc55ff89c1ea641803367ce57564d31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c1c4955c0e2247641817d6f3c6a8fc94d91a53c3fdb8bfffcc19964331d528,PodSandboxId:b6fb3ec3c4a58fc2122c55d477289202fb44240d8556063f22df3207d39d2cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730832505983373696,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b7a91ee8fbce45582dd0907f6d65fecc,},Annotations:map[string]string{io.kubernetes.container.hash: af3d6b47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e085b96eff01f2619a1d1ba4adbd6155a2c6555a5733355dc1e57887da761613,PodSandboxId:4d94410f5e348f1d54fdd26718c86d449651c103f0fe1d201d9b3eb5542b03f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730832504181443753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11ed3e0aec3f12126d4da79c61ef9b9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 6b2555a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4986bb81829b64ec38f9e2ffffda9553fcc1141b3020eeed2ba702407d33e9,PodSandboxId:c8d9bce8150ad28465b57c18d1a62504c706be0814f82233e5ff82877c4eb269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730832484512160883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487d143db49caa7d5a43cbcae040b0e,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06183b252e345f8de25269c8751b19ec95026df4ef5c4abf351c3b70e3caa39,PodSandboxId:ce77eb193413a5bbe0d8f0471e58a9a433187a7e3ff433a5070edcae48b09a7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1730832484468519333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc55ff89c1ea641803367ce57564d
31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88fde2665b54895bf3f7864412315eae3f666cf68f6021706a74da70c620f687,PodSandboxId:b6fb3ec3c4a58fc2122c55d477289202fb44240d8556063f22df3207d39d2cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1730832484446343408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7a91ee8fbce45582dd0907f6d65fecc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: af3d6b47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03a061ef-1fa5-48c7-9b3f-651ae9c7374c name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.745007223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4ae8066-21f1-409a-bb9e-58c7b4af4f06 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.745079084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4ae8066-21f1-409a-bb9e-58c7b4af4f06 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.746718678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3c0ac67-441d-4eb2-a05a-ad34025f65bc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.747144003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730832528747122740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3c0ac67-441d-4eb2-a05a-ad34025f65bc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.747833922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec77e97b-fda6-49b6-a4eb-6c812ef5fd24 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.747884185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec77e97b-fda6-49b6-a4eb-6c812ef5fd24 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.748113059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413dd0cb71e6bbb4b3cc7971c447d01cd8f10326446eeb89c40680bd8a3a02fd,PodSandboxId:3632502fc4b6b81fe8edc5dc49edc4b2fa1d74354e6091e7c39dd1aabbd7b53c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730832517708953212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w5j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 809c66c9-196c-49be-a09e-33ca9d290d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 829736c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f8546e8b2d19264c1af64095de1d3cc6b62efb77a13f6c1f4a0545cce954d9,PodSandboxId:2587711d28a173fe30b900eaa6de9907c479597856387be96dc48d42232dd93f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730832510845198040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9q6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fa32791-7302-4da1-ad43-7fb1fb8ed3ba,},Annotations:map[string]string{io.kubernetes.container.hash: bb5e0646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686f182f9afd5018c9ddf68893444aece100a412134e616eccd86af9b61d1754,PodSandboxId:b9e1c90ee009c668814670889f90d39288cdd40be4178c6d4fbbb042a95b7f9b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730832510572740626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
259676-33af-4e48-9399-599c536a088e,},Annotations:map[string]string{io.kubernetes.container.hash: 531fe77d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d447d66e128e3d30854238c428bd18fa4b727405099d71eeee2223e33bac604,PodSandboxId:ce77eb193413a5bbe0d8f0471e58a9a433187a7e3ff433a5070edcae48b09a7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730832510005084765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 3bc55ff89c1ea641803367ce57564d31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c1c4955c0e2247641817d6f3c6a8fc94d91a53c3fdb8bfffcc19964331d528,PodSandboxId:b6fb3ec3c4a58fc2122c55d477289202fb44240d8556063f22df3207d39d2cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730832505983373696,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b7a91ee8fbce45582dd0907f6d65fecc,},Annotations:map[string]string{io.kubernetes.container.hash: af3d6b47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e085b96eff01f2619a1d1ba4adbd6155a2c6555a5733355dc1e57887da761613,PodSandboxId:4d94410f5e348f1d54fdd26718c86d449651c103f0fe1d201d9b3eb5542b03f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730832504181443753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11ed3e0aec3f12126d4da79c61ef9b9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 6b2555a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4986bb81829b64ec38f9e2ffffda9553fcc1141b3020eeed2ba702407d33e9,PodSandboxId:c8d9bce8150ad28465b57c18d1a62504c706be0814f82233e5ff82877c4eb269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730832484512160883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487d143db49caa7d5a43cbcae040b0e,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06183b252e345f8de25269c8751b19ec95026df4ef5c4abf351c3b70e3caa39,PodSandboxId:ce77eb193413a5bbe0d8f0471e58a9a433187a7e3ff433a5070edcae48b09a7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1730832484468519333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc55ff89c1ea641803367ce57564d
31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88fde2665b54895bf3f7864412315eae3f666cf68f6021706a74da70c620f687,PodSandboxId:b6fb3ec3c4a58fc2122c55d477289202fb44240d8556063f22df3207d39d2cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1730832484446343408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7a91ee8fbce45582dd0907f6d65fecc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: af3d6b47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec77e97b-fda6-49b6-a4eb-6c812ef5fd24 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.789545395Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c72b774-1a6e-41dd-a76f-d4d79094149f name=/runtime.v1.RuntimeService/Version
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.789663810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c72b774-1a6e-41dd-a76f-d4d79094149f name=/runtime.v1.RuntimeService/Version
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.790582474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0d32a6c-e766-4407-89ad-140b6d2e4d58 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.791033633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730832528791011490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0d32a6c-e766-4407-89ad-140b6d2e4d58 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.791520296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=260716e3-4f6d-43e6-8791-239e26620ef6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.791571818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=260716e3-4f6d-43e6-8791-239e26620ef6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:48:48 test-preload-091301 crio[671]: time="2024-11-05 18:48:48.791847743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413dd0cb71e6bbb4b3cc7971c447d01cd8f10326446eeb89c40680bd8a3a02fd,PodSandboxId:3632502fc4b6b81fe8edc5dc49edc4b2fa1d74354e6091e7c39dd1aabbd7b53c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730832517708953212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w5j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 809c66c9-196c-49be-a09e-33ca9d290d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 829736c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f8546e8b2d19264c1af64095de1d3cc6b62efb77a13f6c1f4a0545cce954d9,PodSandboxId:2587711d28a173fe30b900eaa6de9907c479597856387be96dc48d42232dd93f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730832510845198040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9q6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fa32791-7302-4da1-ad43-7fb1fb8ed3ba,},Annotations:map[string]string{io.kubernetes.container.hash: bb5e0646,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686f182f9afd5018c9ddf68893444aece100a412134e616eccd86af9b61d1754,PodSandboxId:b9e1c90ee009c668814670889f90d39288cdd40be4178c6d4fbbb042a95b7f9b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730832510572740626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
259676-33af-4e48-9399-599c536a088e,},Annotations:map[string]string{io.kubernetes.container.hash: 531fe77d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d447d66e128e3d30854238c428bd18fa4b727405099d71eeee2223e33bac604,PodSandboxId:ce77eb193413a5bbe0d8f0471e58a9a433187a7e3ff433a5070edcae48b09a7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730832510005084765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 3bc55ff89c1ea641803367ce57564d31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c1c4955c0e2247641817d6f3c6a8fc94d91a53c3fdb8bfffcc19964331d528,PodSandboxId:b6fb3ec3c4a58fc2122c55d477289202fb44240d8556063f22df3207d39d2cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730832505983373696,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b7a91ee8fbce45582dd0907f6d65fecc,},Annotations:map[string]string{io.kubernetes.container.hash: af3d6b47,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e085b96eff01f2619a1d1ba4adbd6155a2c6555a5733355dc1e57887da761613,PodSandboxId:4d94410f5e348f1d54fdd26718c86d449651c103f0fe1d201d9b3eb5542b03f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730832504181443753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11ed3e0aec3f12126d4da79c61ef9b9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 6b2555a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4986bb81829b64ec38f9e2ffffda9553fcc1141b3020eeed2ba702407d33e9,PodSandboxId:c8d9bce8150ad28465b57c18d1a62504c706be0814f82233e5ff82877c4eb269,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730832484512160883,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7487d143db49caa7d5a43cbcae040b0e,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06183b252e345f8de25269c8751b19ec95026df4ef5c4abf351c3b70e3caa39,PodSandboxId:ce77eb193413a5bbe0d8f0471e58a9a433187a7e3ff433a5070edcae48b09a7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1730832484468519333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc55ff89c1ea641803367ce57564d
31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88fde2665b54895bf3f7864412315eae3f666cf68f6021706a74da70c620f687,PodSandboxId:b6fb3ec3c4a58fc2122c55d477289202fb44240d8556063f22df3207d39d2cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1730832484446343408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-091301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7a91ee8fbce45582dd0907f6d65fecc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: af3d6b47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=260716e3-4f6d-43e6-8791-239e26620ef6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	413dd0cb71e6b       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   11 seconds ago      Running             coredns                   1                   3632502fc4b6b       coredns-6d4b75cb6d-w5j97
	99f8546e8b2d1       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   18 seconds ago      Running             kube-proxy                1                   2587711d28a17       kube-proxy-b9q6b
	686f182f9afd5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 seconds ago      Running             storage-provisioner       1                   b9e1c90ee009c       storage-provisioner
	1d447d66e128e       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   2                   ce77eb193413a       kube-controller-manager-test-preload-091301
	25c1c4955c0e2       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            2                   b6fb3ec3c4a58       kube-apiserver-test-preload-091301
	e085b96eff01f       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   24 seconds ago      Running             etcd                      1                   4d94410f5e348       etcd-test-preload-091301
	be4986bb81829       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   44 seconds ago      Running             kube-scheduler            1                   c8d9bce8150ad       kube-scheduler-test-preload-091301
	d06183b252e34       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   44 seconds ago      Exited              kube-controller-manager   1                   ce77eb193413a       kube-controller-manager-test-preload-091301
	88fde2665b548       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   44 seconds ago      Exited              kube-apiserver            1                   b6fb3ec3c4a58       kube-apiserver-test-preload-091301
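	
	The table above is the CRI view of the node after the restart: coredns, kube-proxy, storage-provisioner, etcd and kube-scheduler are on their first restart, while kube-apiserver and kube-controller-manager needed a second attempt after their first post-restart containers exited. A comparable listing can usually be reproduced directly against CRI-O on the node with crictl (a sketch, assuming the profile name test-preload-091301 and the crictl binary bundled in the minikube guest image):
	
	  minikube ssh -p test-preload-091301 -- sudo crictl ps -a
	  minikube ssh -p test-preload-091301 -- sudo crictl ps -a --name kube-apiserver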
	
	
	==> coredns [413dd0cb71e6bbb4b3cc7971c447d01cd8f10326446eeb89c40680bd8a3a02fd] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:58598 - 54912 "HINFO IN 7206312385110844397.8010859224624220116. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012626145s
	
	
	==> describe nodes <==
	Name:               test-preload-091301
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-091301
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=test-preload-091301
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T18_46_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:46:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-091301
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:48:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:48:39 +0000   Tue, 05 Nov 2024 18:46:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:48:39 +0000   Tue, 05 Nov 2024 18:46:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:48:39 +0000   Tue, 05 Nov 2024 18:46:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:48:39 +0000   Tue, 05 Nov 2024 18:48:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    test-preload-091301
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 53aa7f7ff0cf429fb2af0555dcbc8cc3
	  System UUID:                53aa7f7f-f0cf-429f-b2af-0555dcbc8cc3
	  Boot ID:                    4d46b3f0-0330-46df-ab7b-d06e713c68ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-w5j97                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     108s
	  kube-system                 etcd-test-preload-091301                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m1s
	  kube-system                 kube-apiserver-test-preload-091301             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-test-preload-091301    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-b9q6b                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-test-preload-091301             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 2m9s               kubelet          Starting kubelet.
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m2s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m2s               kubelet          Node test-preload-091301 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s               kubelet          Node test-preload-091301 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s               kubelet          Node test-preload-091301 status is now: NodeHasSufficientPID
	  Normal  NodeReady                111s               kubelet          Node test-preload-091301 status is now: NodeReady
	  Normal  RegisteredNode           109s               node-controller  Node test-preload-091301 event: Registered Node test-preload-091301 in Controller
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node test-preload-091301 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node test-preload-091301 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x7 over 46s)  kubelet          Node test-preload-091301 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node test-preload-091301 event: Registered Node test-preload-091301 in Controller
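	
	Per the description above, the node reports Ready again as of 18:48:39, with the expected control-plane pods present and a fresh RegisteredNode event from the restarted controller manager. The same view can be regenerated from the test host (a sketch, assuming the kubeconfig context created for this profile is named test-preload-091301):
	
	  kubectl --context test-preload-091301 describe node test-preload-091301
	  kubectl --context test-preload-091301 get pods -n kube-system -o wide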
	
	
	==> dmesg <==
	[Nov 5 18:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052237] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037870] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.812008] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.909052] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.508868] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.090456] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.064678] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051102] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.212853] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.109439] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.252307] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Nov 5 18:48] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.057699] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.662211] systemd-fstab-generator[1116]: Ignoring "noauto" option for root device
	[  +5.149186] kauditd_printk_skb: 95 callbacks suppressed
	[ +21.302830] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.999593] systemd-fstab-generator[1858]: Ignoring "noauto" option for root device
	[  +5.475475] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [e085b96eff01f2619a1d1ba4adbd6155a2c6555a5733355dc1e57887da761613] <==
	{"level":"info","ts":"2024-11-05T18:48:24.302Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"feb6ae41040cd9b8","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-11-05T18:48:24.303Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-11-05T18:48:24.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 switched to configuration voters=(18354048925659093432)"}
	{"level":"info","ts":"2024-11-05T18:48:24.303Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","added-peer-id":"feb6ae41040cd9b8","added-peer-peer-urls":["https://192.168.39.235:2380"]}
	{"level":"info","ts":"2024-11-05T18:48:24.304Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T18:48:24.304Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T18:48:24.306Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-11-05T18:48:24.306Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-11-05T18:48:24.306Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-11-05T18:48:24.306Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"feb6ae41040cd9b8","initial-advertise-peer-urls":["https://192.168.39.235:2380"],"listen-peer-urls":["https://192.168.39.235:2380"],"advertise-client-urls":["https://192.168.39.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-11-05T18:48:24.307Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-11-05T18:48:26.193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-11-05T18:48:26.193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-11-05T18:48:26.193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgPreVoteResp from feb6ae41040cd9b8 at term 2"}
	{"level":"info","ts":"2024-11-05T18:48:26.193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became candidate at term 3"}
	{"level":"info","ts":"2024-11-05T18:48:26.193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgVoteResp from feb6ae41040cd9b8 at term 3"}
	{"level":"info","ts":"2024-11-05T18:48:26.193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became leader at term 3"}
	{"level":"info","ts":"2024-11-05T18:48:26.193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: feb6ae41040cd9b8 elected leader feb6ae41040cd9b8 at term 3"}
	{"level":"info","ts":"2024-11-05T18:48:26.194Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"feb6ae41040cd9b8","local-member-attributes":"{Name:test-preload-091301 ClientURLs:[https://192.168.39.235:2379]}","request-path":"/0/members/feb6ae41040cd9b8/attributes","cluster-id":"1b3c53dd134e6187","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T18:48:26.194Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:48:26.195Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T18:48:26.195Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:48:26.197Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.235:2379"}
	{"level":"info","ts":"2024-11-05T18:48:26.198Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T18:48:26.198Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:48:49 up 1 min,  0 users,  load average: 0.44, 0.16, 0.06
	Linux test-preload-091301 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [25c1c4955c0e2247641817d6f3c6a8fc94d91a53c3fdb8bfffcc19964331d528] <==
	I1105 18:48:28.902446       1 controller.go:85] Starting OpenAPI controller
	I1105 18:48:28.904879       1 controller.go:85] Starting OpenAPI V3 controller
	I1105 18:48:28.905014       1 naming_controller.go:291] Starting NamingConditionController
	I1105 18:48:28.905102       1 establishing_controller.go:76] Starting EstablishingController
	I1105 18:48:28.905179       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1105 18:48:28.905266       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1105 18:48:28.905298       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1105 18:48:28.976514       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1105 18:48:28.976917       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1105 18:48:28.976962       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:48:28.988351       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:48:28.993755       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1105 18:48:28.998650       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1105 18:48:29.002358       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1105 18:48:29.565692       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1105 18:48:29.881221       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1105 18:48:30.041512       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:48:30.588111       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1105 18:48:30.602190       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1105 18:48:30.665997       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1105 18:48:30.689096       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1105 18:48:30.696040       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1105 18:48:31.217319       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1105 18:48:41.663295       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:48:41.688975       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [88fde2665b54895bf3f7864412315eae3f666cf68f6021706a74da70c620f687] <==
	I1105 18:48:05.075645       1 server.go:558] external host was not specified, using 192.168.39.235
	I1105 18:48:05.080905       1 server.go:158] Version: v1.24.4
	I1105 18:48:05.081000       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:48:05.683334       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I1105 18:48:05.685235       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1105 18:48:05.685329       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I1105 18:48:05.686897       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1105 18:48:05.686991       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W1105 18:48:05.691230       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:06.680379       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:06.691570       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:07.681694       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:08.549456       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:09.573072       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:11.614364       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:11.689952       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:15.768001       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:16.184423       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:21.468781       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1105 18:48:22.322479       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E1105 18:48:25.691386       1 run.go:74] "command failed" err="context deadline exceeded"
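	
	This exited apiserver (attempt 1) never got a usable etcd connection: every dial to 127.0.0.1:2379 was refused between 18:48:05 and 18:48:22, and the process gave up with "context deadline exceeded" at 18:48:25. That lines up with the etcd log above, where the member only starts serving client traffic at 18:48:26, after which the attempt-2 apiserver comes up cleanly. etcd reachability can be verified on the node itself, for example (a sketch, assuming etcdctl ships inside the etcd container image, and reusing the certificate paths printed in the etcd startup log):
	
	  minikube ssh -p test-preload-091301
	  ETCD_ID=$(sudo crictl ps -q --name etcd)
	  sudo crictl exec "$ETCD_ID" etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health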
	
	
	==> kube-controller-manager [1d447d66e128e3d30854238c428bd18fa4b727405099d71eeee2223e33bac604] <==
	I1105 18:48:41.690548       1 event.go:294] "Event occurred" object="test-preload-091301" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-091301 event: Registered Node test-preload-091301 in Controller"
	I1105 18:48:41.698668       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1105 18:48:41.698837       1 shared_informer.go:262] Caches are synced for disruption
	I1105 18:48:41.698983       1 disruption.go:371] Sending events to api server.
	I1105 18:48:41.709318       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1105 18:48:41.725506       1 shared_informer.go:262] Caches are synced for namespace
	I1105 18:48:41.728834       1 shared_informer.go:262] Caches are synced for daemon sets
	I1105 18:48:41.730503       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1105 18:48:41.731762       1 shared_informer.go:262] Caches are synced for TTL
	I1105 18:48:41.739191       1 shared_informer.go:262] Caches are synced for job
	I1105 18:48:41.741458       1 shared_informer.go:262] Caches are synced for cronjob
	I1105 18:48:41.747766       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1105 18:48:41.781231       1 shared_informer.go:262] Caches are synced for resource quota
	I1105 18:48:41.797810       1 shared_informer.go:262] Caches are synced for persistent volume
	I1105 18:48:41.802387       1 shared_informer.go:262] Caches are synced for stateful set
	I1105 18:48:41.803645       1 shared_informer.go:262] Caches are synced for PVC protection
	I1105 18:48:41.807011       1 shared_informer.go:262] Caches are synced for expand
	I1105 18:48:41.808159       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1105 18:48:41.811441       1 shared_informer.go:262] Caches are synced for ephemeral
	I1105 18:48:41.812697       1 shared_informer.go:262] Caches are synced for resource quota
	I1105 18:48:41.901049       1 shared_informer.go:262] Caches are synced for attach detach
	I1105 18:48:41.904223       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1105 18:48:42.350997       1 shared_informer.go:262] Caches are synced for garbage collector
	I1105 18:48:42.361668       1 shared_informer.go:262] Caches are synced for garbage collector
	I1105 18:48:42.361797       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-controller-manager [d06183b252e345f8de25269c8751b19ec95026df4ef5c4abf351c3b70e3caa39] <==
		/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc0001a4000, {0x4d02200?, 0xc000128148}, 0x902?)
		/usr/local/go/src/crypto/tls/conn.go:807 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc0001a4000, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:614 +0x116
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:582
	crypto/tls.(*Conn).Read(0xc0001a4000, {0xc000d4b000, 0x1000, 0x91a200?})
		/usr/local/go/src/crypto/tls/conn.go:1285 +0x16f
	bufio.(*Reader).Read(0xc0004ee840, {0xc000d2e2e0, 0x9, 0x936b82?})
		/usr/local/go/src/bufio/bufio.go:236 +0x1b4
	io.ReadAtLeast({0x4cf9b00, 0xc0004ee840}, {0xc000d2e2e0, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:331 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:350
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc000d2e2e0?, 0x9?, 0xc001ce8210?}, {0x4cf9b00?, 0xc0004ee840?})
		vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000d2e2a0)
		vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000d50f98)
		vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0004fa180)
		vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		vendor/golang.org/x/net/http2/transport.go:725 +0xa65
	
	
	==> kube-proxy [99f8546e8b2d19264c1af64095de1d3cc6b62efb77a13f6c1f4a0545cce954d9] <==
	I1105 18:48:31.150640       1 node.go:163] Successfully retrieved node IP: 192.168.39.235
	I1105 18:48:31.150836       1 server_others.go:138] "Detected node IP" address="192.168.39.235"
	I1105 18:48:31.150915       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1105 18:48:31.207519       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1105 18:48:31.207537       1 server_others.go:206] "Using iptables Proxier"
	I1105 18:48:31.207564       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1105 18:48:31.208115       1 server.go:661] "Version info" version="v1.24.4"
	I1105 18:48:31.208196       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:48:31.209507       1 config.go:317] "Starting service config controller"
	I1105 18:48:31.209804       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1105 18:48:31.209865       1 config.go:226] "Starting endpoint slice config controller"
	I1105 18:48:31.209883       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1105 18:48:31.213126       1 config.go:444] "Starting node config controller"
	I1105 18:48:31.213177       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1105 18:48:31.310791       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1105 18:48:31.310878       1 shared_informer.go:262] Caches are synced for service config
	I1105 18:48:31.313285       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [be4986bb81829b64ec38f9e2ffffda9553fcc1141b3020eeed2ba702407d33e9] <==
	I1105 18:48:05.897257       1 serving.go:348] Generated self-signed cert in-memory
	W1105 18:48:16.650282       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.39.235:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1105 18:48:16.650333       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1105 18:48:16.650345       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1105 18:48:28.934777       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1105 18:48:28.934888       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:48:28.945147       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1105 18:48:28.945340       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1105 18:48:28.945392       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 18:48:28.945442       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1105 18:48:29.145747       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: I1105 18:48:29.817332    1123 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6fa32791-7302-4da1-ad43-7fb1fb8ed3ba-kube-proxy\") pod \"kube-proxy-b9q6b\" (UID: \"6fa32791-7302-4da1-ad43-7fb1fb8ed3ba\") " pod="kube-system/kube-proxy-b9q6b"
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: I1105 18:48:29.817439    1123 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume\") pod \"coredns-6d4b75cb6d-w5j97\" (UID: \"809c66c9-196c-49be-a09e-33ca9d290d1e\") " pod="kube-system/coredns-6d4b75cb6d-w5j97"
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: I1105 18:48:29.817497    1123 reconciler.go:159] "Reconciler: start to sync state"
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: E1105 18:48:29.815734    1123 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-w5j97" podUID=809c66c9-196c-49be-a09e-33ca9d290d1e
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: E1105 18:48:29.924115    1123 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: E1105 18:48:29.924234    1123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/934397a8-9a26-4e61-a47a-57260dc98dfb-config-volume podName:934397a8-9a26-4e61-a47a-57260dc98dfb nodeName:}" failed. No retries permitted until 2024-11-05 18:48:30.424201307 +0000 UTC m=+26.757370556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/934397a8-9a26-4e61-a47a-57260dc98dfb-config-volume") pod "coredns-6d4b75cb6d-xc4qx" (UID: "934397a8-9a26-4e61-a47a-57260dc98dfb") : object "kube-system"/"coredns" not registered
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: E1105 18:48:29.924662    1123 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: E1105 18:48:29.924715    1123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume podName:809c66c9-196c-49be-a09e-33ca9d290d1e nodeName:}" failed. No retries permitted until 2024-11-05 18:48:30.424702693 +0000 UTC m=+26.757871926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume") pod "coredns-6d4b75cb6d-w5j97" (UID: "809c66c9-196c-49be-a09e-33ca9d290d1e") : object "kube-system"/"coredns" not registered
	Nov 05 18:48:29 test-preload-091301 kubelet[1123]: I1105 18:48:29.989964    1123 scope.go:110] "RemoveContainer" containerID="d06183b252e345f8de25269c8751b19ec95026df4ef5c4abf351c3b70e3caa39"
	Nov 05 18:48:30 test-preload-091301 kubelet[1123]: E1105 18:48:30.427317    1123 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 05 18:48:30 test-preload-091301 kubelet[1123]: E1105 18:48:30.427376    1123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume podName:809c66c9-196c-49be-a09e-33ca9d290d1e nodeName:}" failed. No retries permitted until 2024-11-05 18:48:31.427361791 +0000 UTC m=+27.760531038 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume") pod "coredns-6d4b75cb6d-w5j97" (UID: "809c66c9-196c-49be-a09e-33ca9d290d1e") : object "kube-system"/"coredns" not registered
	Nov 05 18:48:30 test-preload-091301 kubelet[1123]: E1105 18:48:30.427408    1123 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 05 18:48:30 test-preload-091301 kubelet[1123]: E1105 18:48:30.427425    1123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/934397a8-9a26-4e61-a47a-57260dc98dfb-config-volume podName:934397a8-9a26-4e61-a47a-57260dc98dfb nodeName:}" failed. No retries permitted until 2024-11-05 18:48:31.427418581 +0000 UTC m=+27.760587826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/934397a8-9a26-4e61-a47a-57260dc98dfb-config-volume") pod "coredns-6d4b75cb6d-xc4qx" (UID: "934397a8-9a26-4e61-a47a-57260dc98dfb") : object "kube-system"/"coredns" not registered
	Nov 05 18:48:30 test-preload-091301 kubelet[1123]: I1105 18:48:30.628802    1123 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9dbs\" (UniqueName: \"kubernetes.io/projected/934397a8-9a26-4e61-a47a-57260dc98dfb-kube-api-access-z9dbs\") pod \"934397a8-9a26-4e61-a47a-57260dc98dfb\" (UID: \"934397a8-9a26-4e61-a47a-57260dc98dfb\") "
	Nov 05 18:48:30 test-preload-091301 kubelet[1123]: I1105 18:48:30.632458    1123 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934397a8-9a26-4e61-a47a-57260dc98dfb-kube-api-access-z9dbs" (OuterVolumeSpecName: "kube-api-access-z9dbs") pod "934397a8-9a26-4e61-a47a-57260dc98dfb" (UID: "934397a8-9a26-4e61-a47a-57260dc98dfb"). InnerVolumeSpecName "kube-api-access-z9dbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 05 18:48:30 test-preload-091301 kubelet[1123]: I1105 18:48:30.729545    1123 reconciler.go:384] "Volume detached for volume \"kube-api-access-z9dbs\" (UniqueName: \"kubernetes.io/projected/934397a8-9a26-4e61-a47a-57260dc98dfb-kube-api-access-z9dbs\") on node \"test-preload-091301\" DevicePath \"\""
	Nov 05 18:48:31 test-preload-091301 kubelet[1123]: E1105 18:48:31.433001    1123 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 05 18:48:31 test-preload-091301 kubelet[1123]: E1105 18:48:31.433078    1123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/934397a8-9a26-4e61-a47a-57260dc98dfb-config-volume podName:934397a8-9a26-4e61-a47a-57260dc98dfb nodeName:}" failed. No retries permitted until 2024-11-05 18:48:33.433061743 +0000 UTC m=+29.766230976 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/934397a8-9a26-4e61-a47a-57260dc98dfb-config-volume") pod "coredns-6d4b75cb6d-xc4qx" (UID: "934397a8-9a26-4e61-a47a-57260dc98dfb") : object "kube-system"/"coredns" not registered
	Nov 05 18:48:31 test-preload-091301 kubelet[1123]: E1105 18:48:31.433155    1123 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 05 18:48:31 test-preload-091301 kubelet[1123]: E1105 18:48:31.433177    1123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume podName:809c66c9-196c-49be-a09e-33ca9d290d1e nodeName:}" failed. No retries permitted until 2024-11-05 18:48:33.433170155 +0000 UTC m=+29.766339388 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume") pod "coredns-6d4b75cb6d-w5j97" (UID: "809c66c9-196c-49be-a09e-33ca9d290d1e") : object "kube-system"/"coredns" not registered
	Nov 05 18:48:31 test-preload-091301 kubelet[1123]: E1105 18:48:31.891811    1123 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-w5j97" podUID=809c66c9-196c-49be-a09e-33ca9d290d1e
	Nov 05 18:48:32 test-preload-091301 kubelet[1123]: I1105 18:48:32.138748    1123 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/934397a8-9a26-4e61-a47a-57260dc98dfb-config-volume\") on node \"test-preload-091301\" DevicePath \"\""
	Nov 05 18:48:33 test-preload-091301 kubelet[1123]: E1105 18:48:33.451211    1123 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 05 18:48:33 test-preload-091301 kubelet[1123]: E1105 18:48:33.451363    1123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume podName:809c66c9-196c-49be-a09e-33ca9d290d1e nodeName:}" failed. No retries permitted until 2024-11-05 18:48:37.451341485 +0000 UTC m=+33.784510730 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/809c66c9-196c-49be-a09e-33ca9d290d1e-config-volume") pod "coredns-6d4b75cb6d-w5j97" (UID: "809c66c9-196c-49be-a09e-33ca9d290d1e") : object "kube-system"/"coredns" not registered
	Nov 05 18:48:33 test-preload-091301 kubelet[1123]: I1105 18:48:33.896907    1123 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=934397a8-9a26-4e61-a47a-57260dc98dfb path="/var/lib/kubelet/pods/934397a8-9a26-4e61-a47a-57260dc98dfb/volumes"
	
	
	==> storage-provisioner [686f182f9afd5018c9ddf68893444aece100a412134e616eccd86af9b61d1754] <==
	I1105 18:48:30.810888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-091301 -n test-preload-091301
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-091301 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-091301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-091301
--- FAIL: TestPreload (194.34s)
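The kubelet entries in the post-mortem above show the volume manager retrying the coredns config-volume mount with a doubling delay (durationBeforeRetry 500ms, then 1s, 2s, 4s) while the "kube-system"/"coredns" ConfigMap is not yet re-registered after the restart. The sketch below only illustrates that doubling-backoff pattern in Go; the helper name retryWithBackoff is hypothetical and this is not the kubelet's actual nestedpendingoperations code.

	// Minimal sketch of a doubling retry delay, mirroring the
	// "durationBeforeRetry" progression in the kubelet log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff retries op, doubling the wait after each failure
	// and giving up once the wait exceeds max.
	func retryWithBackoff(op func() error, initial, max time.Duration) error {
		delay := initial
		for {
			err := op()
			if err == nil {
				return nil
			}
			if delay > max {
				return fmt.Errorf("giving up: %w", err)
			}
			fmt.Printf("retrying in %v\n", delay)
			time.Sleep(delay)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryWithBackoff(func() error {
			attempts++
			if attempts < 4 {
				return errors.New(`object "kube-system"/"coredns" not registered`)
			}
			return nil
		}, 500*time.Millisecond, 4*time.Second)
	}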

                                                
                                    
x
+
TestKubernetesUpgrade (425.88s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m46.437631233s)
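The step above shells out to the prebuilt binary with exactly the flags shown and treats the non-zero exit code (109 here) as the failure. The Go sketch below shows roughly what that invocation looks like; it assumes a minikube checkout with out/minikube-linux-amd64 already built and is only an approximation, not the actual helper in test/integration.

	// Rough sketch of the "minikube start" invocation recorded above.
	// The flags are copied from the log line; the real test helper adds
	// timeouts, log capture and assertions around this call.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"start", "-p", "kubernetes-upgrade-906991",
			"--memory=2200",
			"--kubernetes-version=v1.20.0",
			"--alsologtostderr", "-v=1",
			"--driver=kvm2",
			"--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// In the run recorded here, this path was taken with exit status 109.
			fmt.Println("minikube start failed:", err)
		}
	}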

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-906991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-906991" primary control-plane node in "kubernetes-upgrade-906991" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:53:20.837744   55563 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:53:20.837846   55563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:53:20.837852   55563 out.go:358] Setting ErrFile to fd 2...
	I1105 18:53:20.837858   55563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:53:20.838077   55563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:53:20.838690   55563 out.go:352] Setting JSON to false
	I1105 18:53:20.839755   55563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5743,"bootTime":1730827058,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:53:20.839859   55563 start.go:139] virtualization: kvm guest
	I1105 18:53:20.842061   55563 out.go:177] * [kubernetes-upgrade-906991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:53:20.843332   55563 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:53:20.843343   55563 notify.go:220] Checking for updates...
	I1105 18:53:20.845864   55563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:53:20.847473   55563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:53:20.848885   55563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:53:20.850559   55563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:53:20.852007   55563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:53:20.854287   55563 config.go:182] Loaded profile config "NoKubernetes-048420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1105 18:53:20.854406   55563 config.go:182] Loaded profile config "cert-expiration-099467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:53:20.854519   55563 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:53:20.895892   55563 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 18:53:20.897077   55563 start.go:297] selected driver: kvm2
	I1105 18:53:20.897093   55563 start.go:901] validating driver "kvm2" against <nil>
	I1105 18:53:20.897106   55563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:53:20.897794   55563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:53:20.897917   55563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:53:20.913818   55563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:53:20.913891   55563 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 18:53:20.914209   55563 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 18:53:20.914258   55563 cni.go:84] Creating CNI manager for ""
	I1105 18:53:20.914324   55563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:53:20.914351   55563 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 18:53:20.914464   55563 start.go:340] cluster config:
	{Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:53:20.914621   55563 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:53:20.916852   55563 out.go:177] * Starting "kubernetes-upgrade-906991" primary control-plane node in "kubernetes-upgrade-906991" cluster
	I1105 18:53:20.918246   55563 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 18:53:20.918294   55563 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 18:53:20.918304   55563 cache.go:56] Caching tarball of preloaded images
	I1105 18:53:20.918389   55563 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:53:20.918403   55563 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 18:53:20.918523   55563 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/config.json ...
	I1105 18:53:20.918546   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/config.json: {Name:mk5809fc39f91219e0c4aa524d5a05f5df35320f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:53:20.918716   55563 start.go:360] acquireMachinesLock for kubernetes-upgrade-906991: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:53:39.235642   55563 start.go:364] duration metric: took 18.316878432s to acquireMachinesLock for "kubernetes-upgrade-906991"
	I1105 18:53:39.235718   55563 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:53:39.235835   55563 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 18:53:39.237925   55563 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 18:53:39.238129   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:53:39.238176   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:53:39.255126   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I1105 18:53:39.255633   55563 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:53:39.256226   55563 main.go:141] libmachine: Using API Version  1
	I1105 18:53:39.256251   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:53:39.256643   55563 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:53:39.256828   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetMachineName
	I1105 18:53:39.256956   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:53:39.257091   55563 start.go:159] libmachine.API.Create for "kubernetes-upgrade-906991" (driver="kvm2")
	I1105 18:53:39.257126   55563 client.go:168] LocalClient.Create starting
	I1105 18:53:39.257157   55563 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:53:39.257189   55563 main.go:141] libmachine: Decoding PEM data...
	I1105 18:53:39.257206   55563 main.go:141] libmachine: Parsing certificate...
	I1105 18:53:39.257258   55563 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:53:39.257276   55563 main.go:141] libmachine: Decoding PEM data...
	I1105 18:53:39.257287   55563 main.go:141] libmachine: Parsing certificate...
	I1105 18:53:39.257303   55563 main.go:141] libmachine: Running pre-create checks...
	I1105 18:53:39.257312   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .PreCreateCheck
	I1105 18:53:39.257698   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetConfigRaw
	I1105 18:53:39.258233   55563 main.go:141] libmachine: Creating machine...
	I1105 18:53:39.258248   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .Create
	I1105 18:53:39.258380   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Creating KVM machine...
	I1105 18:53:39.259656   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found existing default KVM network
	I1105 18:53:39.260831   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:39.260663   55704 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:28:61:5c} reservation:<nil>}
	I1105 18:53:39.261785   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:39.261707   55704 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7c:81:aa} reservation:<nil>}
	I1105 18:53:39.262901   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:39.262826   55704 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003564d0}
	I1105 18:53:39.262951   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | created network xml: 
	I1105 18:53:39.262992   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | <network>
	I1105 18:53:39.263009   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |   <name>mk-kubernetes-upgrade-906991</name>
	I1105 18:53:39.263021   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |   <dns enable='no'/>
	I1105 18:53:39.263030   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |   
	I1105 18:53:39.263041   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1105 18:53:39.263049   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |     <dhcp>
	I1105 18:53:39.263057   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1105 18:53:39.263065   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |     </dhcp>
	I1105 18:53:39.263071   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |   </ip>
	I1105 18:53:39.263102   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG |   
	I1105 18:53:39.263128   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | </network>
	I1105 18:53:39.263140   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | 
	I1105 18:53:39.268630   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | trying to create private KVM network mk-kubernetes-upgrade-906991 192.168.61.0/24...
	I1105 18:53:39.334794   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | private KVM network mk-kubernetes-upgrade-906991 192.168.61.0/24 created
	I1105 18:53:39.334821   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991 ...
	I1105 18:53:39.334835   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:39.334792   55704 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:53:39.334847   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:53:39.334920   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:53:39.581255   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:39.581146   55704 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa...
	I1105 18:53:39.935819   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:39.935639   55704 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/kubernetes-upgrade-906991.rawdisk...
	I1105 18:53:39.935856   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Writing magic tar header
	I1105 18:53:39.935874   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991 (perms=drwx------)
	I1105 18:53:39.935893   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:53:39.935904   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:53:39.935918   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Writing SSH key tar header
	I1105 18:53:39.935936   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:39.935748   55704 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991 ...
	I1105 18:53:39.935962   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:53:39.935979   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:53:39.935995   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991
	I1105 18:53:39.936007   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:53:39.936020   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Creating domain...
	I1105 18:53:39.936064   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:53:39.936096   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:53:39.936113   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:53:39.936123   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:53:39.936135   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:53:39.936146   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Checking permissions on dir: /home
	I1105 18:53:39.936160   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Skipping /home - not owner
	I1105 18:53:39.937131   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) define libvirt domain using xml: 
	I1105 18:53:39.937167   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) <domain type='kvm'>
	I1105 18:53:39.937180   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   <name>kubernetes-upgrade-906991</name>
	I1105 18:53:39.937196   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   <memory unit='MiB'>2200</memory>
	I1105 18:53:39.937209   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   <vcpu>2</vcpu>
	I1105 18:53:39.937220   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   <features>
	I1105 18:53:39.937233   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <acpi/>
	I1105 18:53:39.937248   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <apic/>
	I1105 18:53:39.937260   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <pae/>
	I1105 18:53:39.937275   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     
	I1105 18:53:39.937286   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   </features>
	I1105 18:53:39.937297   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   <cpu mode='host-passthrough'>
	I1105 18:53:39.937309   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   
	I1105 18:53:39.937318   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   </cpu>
	I1105 18:53:39.937330   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   <os>
	I1105 18:53:39.937340   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <type>hvm</type>
	I1105 18:53:39.937363   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <boot dev='cdrom'/>
	I1105 18:53:39.937384   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <boot dev='hd'/>
	I1105 18:53:39.937394   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <bootmenu enable='no'/>
	I1105 18:53:39.937404   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   </os>
	I1105 18:53:39.937415   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   <devices>
	I1105 18:53:39.937424   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <disk type='file' device='cdrom'>
	I1105 18:53:39.937448   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/boot2docker.iso'/>
	I1105 18:53:39.937461   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <target dev='hdc' bus='scsi'/>
	I1105 18:53:39.937473   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <readonly/>
	I1105 18:53:39.937483   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     </disk>
	I1105 18:53:39.937491   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <disk type='file' device='disk'>
	I1105 18:53:39.937507   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:53:39.937525   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/kubernetes-upgrade-906991.rawdisk'/>
	I1105 18:53:39.937535   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <target dev='hda' bus='virtio'/>
	I1105 18:53:39.937552   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     </disk>
	I1105 18:53:39.937562   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <interface type='network'>
	I1105 18:53:39.937573   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <source network='mk-kubernetes-upgrade-906991'/>
	I1105 18:53:39.937595   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <model type='virtio'/>
	I1105 18:53:39.937608   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     </interface>
	I1105 18:53:39.937618   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <interface type='network'>
	I1105 18:53:39.937627   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <source network='default'/>
	I1105 18:53:39.937637   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <model type='virtio'/>
	I1105 18:53:39.937645   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     </interface>
	I1105 18:53:39.937659   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <serial type='pty'>
	I1105 18:53:39.937674   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <target port='0'/>
	I1105 18:53:39.937683   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     </serial>
	I1105 18:53:39.937692   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <console type='pty'>
	I1105 18:53:39.937703   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <target type='serial' port='0'/>
	I1105 18:53:39.937712   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     </console>
	I1105 18:53:39.937721   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     <rng model='virtio'>
	I1105 18:53:39.937738   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)       <backend model='random'>/dev/random</backend>
	I1105 18:53:39.937759   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     </rng>
	I1105 18:53:39.937767   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     
	I1105 18:53:39.937774   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)     
	I1105 18:53:39.937782   55563 main.go:141] libmachine: (kubernetes-upgrade-906991)   </devices>
	I1105 18:53:39.937786   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) </domain>
	I1105 18:53:39.937793   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) 
	I1105 18:53:39.941916   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:95:fa:de in network default
	I1105 18:53:39.942540   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Ensuring networks are active...
	I1105 18:53:39.942572   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:39.943312   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Ensuring network default is active
	I1105 18:53:39.943615   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Ensuring network mk-kubernetes-upgrade-906991 is active
	I1105 18:53:39.944204   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Getting domain xml...
	I1105 18:53:39.945121   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Creating domain...
	I1105 18:53:41.218745   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Waiting to get IP...
	I1105 18:53:41.219652   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:41.220167   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:41.220196   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:41.220136   55704 retry.go:31] will retry after 250.701997ms: waiting for machine to come up
	I1105 18:53:41.472454   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:41.472868   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:41.472911   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:41.472837   55704 retry.go:31] will retry after 364.096275ms: waiting for machine to come up
	I1105 18:53:41.838351   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:41.838803   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:41.838827   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:41.838757   55704 retry.go:31] will retry after 486.922431ms: waiting for machine to come up
	I1105 18:53:42.327412   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:42.327831   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:42.327863   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:42.327782   55704 retry.go:31] will retry after 386.571034ms: waiting for machine to come up
	I1105 18:53:42.716232   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:42.716711   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:42.716737   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:42.716669   55704 retry.go:31] will retry after 732.271944ms: waiting for machine to come up
	I1105 18:53:43.450328   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:43.450736   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:43.450755   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:43.450699   55704 retry.go:31] will retry after 803.111651ms: waiting for machine to come up
	I1105 18:53:44.255838   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:44.256400   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:44.256421   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:44.256346   55704 retry.go:31] will retry after 985.941829ms: waiting for machine to come up
	I1105 18:53:45.243993   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:45.244471   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:45.244499   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:45.244428   55704 retry.go:31] will retry after 1.443146326s: waiting for machine to come up
	I1105 18:53:46.688758   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:46.689371   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:46.689415   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:46.689314   55704 retry.go:31] will retry after 1.725860264s: waiting for machine to come up
	I1105 18:53:48.416347   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:48.416903   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:48.416926   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:48.416825   55704 retry.go:31] will retry after 2.044462649s: waiting for machine to come up
	I1105 18:53:50.462882   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:50.463387   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:50.463430   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:50.463353   55704 retry.go:31] will retry after 2.547100701s: waiting for machine to come up
	I1105 18:53:53.011633   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:53.012191   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:53.012216   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:53.012118   55704 retry.go:31] will retry after 2.841949978s: waiting for machine to come up
	I1105 18:53:55.855951   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:55.856370   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find current IP address of domain kubernetes-upgrade-906991 in network mk-kubernetes-upgrade-906991
	I1105 18:53:55.856391   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | I1105 18:53:55.856330   55704 retry.go:31] will retry after 4.056654544s: waiting for machine to come up
	I1105 18:53:59.914859   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:59.915358   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Found IP for machine: 192.168.61.130
	I1105 18:53:59.915383   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has current primary IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:59.915393   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Reserving static IP address...
	I1105 18:53:59.915744   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-906991", mac: "52:54:00:17:30:ab", ip: "192.168.61.130"} in network mk-kubernetes-upgrade-906991
	I1105 18:53:59.988835   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Getting to WaitForSSH function...
	I1105 18:53:59.988858   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Reserved static IP address: 192.168.61.130
	I1105 18:53:59.988898   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Waiting for SSH to be available...
	I1105 18:53:59.991316   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:59.991780   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:30:ab}
	I1105 18:53:59.991803   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:53:59.991957   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Using SSH client type: external
	I1105 18:53:59.991984   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa (-rw-------)
	I1105 18:53:59.992020   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:53:59.992033   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | About to run SSH command:
	I1105 18:53:59.992061   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | exit 0
	I1105 18:54:00.114785   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | SSH cmd err, output: <nil>: 
	I1105 18:54:00.115114   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) KVM machine creation complete!
	I1105 18:54:00.115397   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetConfigRaw
	I1105 18:54:00.115925   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:54:00.116107   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:54:00.116293   55563 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:54:00.116306   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetState
	I1105 18:54:00.117532   55563 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:54:00.117544   55563 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:54:00.117549   55563 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:54:00.117557   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:00.119922   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.120232   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:00.120259   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.120361   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:00.120539   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.120709   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.120848   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:00.120988   55563 main.go:141] libmachine: Using SSH client type: native
	I1105 18:54:00.121163   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:54:00.121172   55563 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:54:00.222178   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:54:00.222200   55563 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:54:00.222208   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:00.224910   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.225339   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:00.225404   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.225564   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:00.225749   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.225908   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.226049   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:00.226203   55563 main.go:141] libmachine: Using SSH client type: native
	I1105 18:54:00.226364   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:54:00.226375   55563 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:54:00.331354   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:54:00.331442   55563 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:54:00.331453   55563 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:54:00.331461   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetMachineName
	I1105 18:54:00.331749   55563 buildroot.go:166] provisioning hostname "kubernetes-upgrade-906991"
	I1105 18:54:00.331769   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetMachineName
	I1105 18:54:00.331940   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:00.334504   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.334922   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:00.334950   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.335152   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:00.335307   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.335466   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.335596   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:00.335744   55563 main.go:141] libmachine: Using SSH client type: native
	I1105 18:54:00.335905   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:54:00.335917   55563 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-906991 && echo "kubernetes-upgrade-906991" | sudo tee /etc/hostname
	I1105 18:54:00.448883   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-906991
	
	I1105 18:54:00.448922   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:00.451584   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.451926   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:00.451956   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.452044   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:00.452213   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.452353   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.452497   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:00.452659   55563 main.go:141] libmachine: Using SSH client type: native
	I1105 18:54:00.452822   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:54:00.452838   55563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-906991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-906991/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-906991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:54:00.563157   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
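A minimal by-hand check of the hostname provisioning performed above, assuming a shell on the guest (the hostname value is taken from the log):

	# verify the transient hostname and the /etc/hosts pin set by the two SSH commands above
	hostname                      # expected: kubernetes-upgrade-906991
	grep '^127.0.1.1' /etc/hosts  # expected: 127.0.1.1 kubernetes-upgrade-906991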
	I1105 18:54:00.563190   55563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:54:00.563236   55563 buildroot.go:174] setting up certificates
	I1105 18:54:00.563251   55563 provision.go:84] configureAuth start
	I1105 18:54:00.563269   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetMachineName
	I1105 18:54:00.563545   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetIP
	I1105 18:54:00.566035   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.566468   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:00.566517   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.566720   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:00.568938   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.569272   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:00.569310   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.569436   55563 provision.go:143] copyHostCerts
	I1105 18:54:00.569488   55563 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:54:00.569505   55563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:54:00.569557   55563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:54:00.569652   55563 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:54:00.569661   55563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:54:00.569680   55563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:54:00.569734   55563 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:54:00.569741   55563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:54:00.569758   55563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:54:00.569800   55563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-906991 san=[127.0.0.1 192.168.61.130 kubernetes-upgrade-906991 localhost minikube]
	I1105 18:54:00.823978   55563 provision.go:177] copyRemoteCerts
	I1105 18:54:00.824049   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:54:00.824094   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:00.826848   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.827292   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:00.827321   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.827491   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:00.827695   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.827873   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:00.828057   55563 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa Username:docker}
	I1105 18:54:00.909024   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:54:00.935467   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1105 18:54:00.961165   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:54:00.986633   55563 provision.go:87] duration metric: took 423.365936ms to configureAuth
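configureAuth above generates a server certificate for the machine (SANs 127.0.0.1, 192.168.61.130, kubernetes-upgrade-906991, localhost, minikube) and copies it to /etc/docker/server.pem on the guest. A quick SAN check, assuming openssl is present on the guest:

	# confirm the SANs baked into the copied server certificate (sketch)
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'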
	I1105 18:54:00.986663   55563 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:54:00.986870   55563 config.go:182] Loaded profile config "kubernetes-upgrade-906991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 18:54:00.986947   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:00.989682   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.990076   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:00.990100   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:00.990253   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:00.990423   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.990592   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:00.990681   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:00.990817   55563 main.go:141] libmachine: Using SSH client type: native
	I1105 18:54:00.991040   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:54:00.991059   55563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:54:01.215135   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:54:01.215170   55563 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:54:01.215182   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetURL
	I1105 18:54:01.216641   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | Using libvirt version 6000000
	I1105 18:54:01.218712   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.219242   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:01.219274   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.219421   55563 main.go:141] libmachine: Docker is up and running!
	I1105 18:54:01.219437   55563 main.go:141] libmachine: Reticulating splines...
	I1105 18:54:01.219445   55563 client.go:171] duration metric: took 21.962309471s to LocalClient.Create
	I1105 18:54:01.219469   55563 start.go:167] duration metric: took 21.962380914s to libmachine.API.Create "kubernetes-upgrade-906991"
	I1105 18:54:01.219478   55563 start.go:293] postStartSetup for "kubernetes-upgrade-906991" (driver="kvm2")
	I1105 18:54:01.219488   55563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:54:01.219504   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:54:01.219712   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:54:01.219741   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:01.221993   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.222259   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:01.222283   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.222438   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:01.222629   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:01.222801   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:01.222944   55563 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa Username:docker}
	I1105 18:54:01.301152   55563 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:54:01.304942   55563 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:54:01.304968   55563 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:54:01.305025   55563 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:54:01.305117   55563 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:54:01.305214   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:54:01.314589   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:54:01.336928   55563 start.go:296] duration metric: took 117.43724ms for postStartSetup
	I1105 18:54:01.336989   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetConfigRaw
	I1105 18:54:01.337565   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetIP
	I1105 18:54:01.340224   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.340574   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:01.340608   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.340769   55563 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/config.json ...
	I1105 18:54:01.340972   55563 start.go:128] duration metric: took 22.105124404s to createHost
	I1105 18:54:01.341010   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:01.343157   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.343452   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:01.343472   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.343629   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:01.343805   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:01.343918   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:01.344059   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:01.344187   55563 main.go:141] libmachine: Using SSH client type: native
	I1105 18:54:01.344346   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:54:01.344356   55563 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:54:01.447364   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730832841.421487537
	
	I1105 18:54:01.447386   55563 fix.go:216] guest clock: 1730832841.421487537
	I1105 18:54:01.447395   55563 fix.go:229] Guest: 2024-11-05 18:54:01.421487537 +0000 UTC Remote: 2024-11-05 18:54:01.34099602 +0000 UTC m=+40.542340157 (delta=80.491517ms)
	I1105 18:54:01.447439   55563 fix.go:200] guest clock delta is within tolerance: 80.491517ms
	I1105 18:54:01.447447   55563 start.go:83] releasing machines lock for "kubernetes-upgrade-906991", held for 22.211771895s
	I1105 18:54:01.447475   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:54:01.447793   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetIP
	I1105 18:54:01.450833   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.451261   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:01.451293   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.451457   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:54:01.452036   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:54:01.452188   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:54:01.452257   55563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:54:01.452309   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:01.452385   55563 ssh_runner.go:195] Run: cat /version.json
	I1105 18:54:01.452405   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:54:01.454923   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.455207   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.455251   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:01.455281   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.455407   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:01.455590   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:01.455637   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:01.455666   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:01.455738   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:01.455805   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:54:01.455887   55563 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa Username:docker}
	I1105 18:54:01.455951   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:54:01.456053   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:54:01.456194   55563 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa Username:docker}
	I1105 18:54:01.533324   55563 ssh_runner.go:195] Run: systemctl --version
	I1105 18:54:01.566030   55563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:54:01.723026   55563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:54:01.729241   55563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:54:01.729313   55563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:54:01.744204   55563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:54:01.744228   55563 start.go:495] detecting cgroup driver to use...
	I1105 18:54:01.744303   55563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:54:01.760557   55563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:54:01.775383   55563 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:54:01.775448   55563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:54:01.788526   55563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:54:01.802242   55563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:54:01.919757   55563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:54:02.078472   55563 docker.go:233] disabling docker service ...
	I1105 18:54:02.078549   55563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:54:02.092519   55563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:54:02.108846   55563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:54:02.223153   55563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:54:02.334640   55563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:54:02.348261   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:54:02.366720   55563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 18:54:02.366777   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:54:02.377719   55563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:54:02.377791   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:54:02.387945   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:54:02.398284   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
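The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A sketch of checking the end state (expected values come straight from the sed expressions):

	# confirm the drop-in now carries the pause image and cgroup settings
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"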
	I1105 18:54:02.411201   55563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:54:02.421450   55563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:54:02.430620   55563 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:54:02.430673   55563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:54:02.442490   55563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
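The sysctl above fails until br_netfilter is loaded, after which the module is inserted and IPv4 forwarding is switched on. A quick re-check, assuming a root shell on the guest:

	# re-verify the bridge/netfilter prerequisites set above
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward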
	I1105 18:54:02.451679   55563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:54:02.558373   55563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:54:02.659005   55563 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:54:02.659075   55563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:54:02.663701   55563 start.go:563] Will wait 60s for crictl version
	I1105 18:54:02.663761   55563 ssh_runner.go:195] Run: which crictl
	I1105 18:54:02.667324   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:54:02.705620   55563 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:54:02.705720   55563 ssh_runner.go:195] Run: crio --version
	I1105 18:54:02.733585   55563 ssh_runner.go:195] Run: crio --version
	I1105 18:54:02.763449   55563 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1105 18:54:02.764837   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetIP
	I1105 18:54:02.769941   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:02.770521   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:53:53 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:54:02.770552   55563 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:54:02.770814   55563 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 18:54:02.775340   55563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:54:02.789527   55563 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:54:02.789652   55563 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 18:54:02.789714   55563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:54:02.821716   55563 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 18:54:02.821783   55563 ssh_runner.go:195] Run: which lz4
	I1105 18:54:02.825994   55563 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:54:02.830168   55563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:54:02.830202   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 18:54:04.372706   55563 crio.go:462] duration metric: took 1.546752468s to copy over tarball
	I1105 18:54:04.372784   55563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:54:06.924591   55563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551774708s)
	I1105 18:54:06.924622   55563 crio.go:469] duration metric: took 2.551887414s to extract the tarball
	I1105 18:54:06.924632   55563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 18:54:06.967202   55563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:54:07.010374   55563 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 18:54:07.010412   55563 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 18:54:07.010499   55563 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 18:54:07.010513   55563 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 18:54:07.010534   55563 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 18:54:07.010516   55563 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 18:54:07.010560   55563 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 18:54:07.010578   55563 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 18:54:07.010490   55563 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 18:54:07.010493   55563 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:54:07.012249   55563 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 18:54:07.012261   55563 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 18:54:07.012272   55563 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 18:54:07.012293   55563 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:54:07.012305   55563 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 18:54:07.012340   55563 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 18:54:07.012249   55563 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 18:54:07.012376   55563 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 18:54:07.233967   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 18:54:07.261564   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 18:54:07.261871   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 18:54:07.270130   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 18:54:07.273175   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 18:54:07.284033   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 18:54:07.286292   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 18:54:07.312744   55563 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 18:54:07.312790   55563 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 18:54:07.312832   55563 ssh_runner.go:195] Run: which crictl
	I1105 18:54:07.412086   55563 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 18:54:07.412116   55563 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 18:54:07.412130   55563 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 18:54:07.412150   55563 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 18:54:07.412177   55563 ssh_runner.go:195] Run: which crictl
	I1105 18:54:07.412193   55563 ssh_runner.go:195] Run: which crictl
	I1105 18:54:07.416643   55563 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 18:54:07.416698   55563 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 18:54:07.416741   55563 ssh_runner.go:195] Run: which crictl
	I1105 18:54:07.417881   55563 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 18:54:07.417923   55563 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 18:54:07.417951   55563 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 18:54:07.417991   55563 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 18:54:07.417998   55563 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 18:54:07.417958   55563 ssh_runner.go:195] Run: which crictl
	I1105 18:54:07.418031   55563 ssh_runner.go:195] Run: which crictl
	I1105 18:54:07.418022   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 18:54:07.418021   55563 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 18:54:07.418082   55563 ssh_runner.go:195] Run: which crictl
	I1105 18:54:07.422144   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 18:54:07.422210   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 18:54:07.433704   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 18:54:07.433755   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 18:54:07.433805   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 18:54:07.433834   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 18:54:07.497281   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 18:54:07.580384   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 18:54:07.615843   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 18:54:07.615877   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 18:54:07.615855   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 18:54:07.615938   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 18:54:07.615952   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 18:54:07.616017   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 18:54:07.707742   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 18:54:07.765177   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 18:54:07.765243   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 18:54:07.765179   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 18:54:07.765309   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 18:54:07.765418   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 18:54:07.765458   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 18:54:07.795829   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 18:54:07.883649   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 18:54:07.883674   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 18:54:07.883649   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 18:54:07.883697   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 18:54:07.883717   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 18:54:08.247787   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:54:08.393680   55563 cache_images.go:92] duration metric: took 1.383247187s to LoadCachedImages
	W1105 18:54:08.393779   55563 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1105 18:54:08.393797   55563 kubeadm.go:934] updating node { 192.168.61.130 8443 v1.20.0 crio true true} ...
	I1105 18:54:08.393906   55563 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-906991 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
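The kubelet unit override and node flags above are installed by the scp steps a few lines below (kubelet.service and 10-kubeadm.conf). One way to see what systemd actually loads afterwards, assuming systemctl cat is available on the guest:

	# inspect the installed kubelet unit and drop-in (sketch)
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf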
	I1105 18:54:08.393989   55563 ssh_runner.go:195] Run: crio config
	I1105 18:54:08.448256   55563 cni.go:84] Creating CNI manager for ""
	I1105 18:54:08.448287   55563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:54:08.448301   55563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:54:08.448322   55563 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.130 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-906991 NodeName:kubernetes-upgrade-906991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 18:54:08.448520   55563 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-906991"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
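The kubeadm/kubelet/kube-proxy config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A dry run against it is one way to sanity-check the rendering; this is a sketch of a manual step, not something the test does:

	# dry-run kubeadm against the rendered config (binary path taken from the log)
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run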
	
	I1105 18:54:08.448595   55563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 18:54:08.458888   55563 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:54:08.458959   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 18:54:08.468492   55563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1105 18:54:08.487122   55563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:54:08.505692   55563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1105 18:54:08.524862   55563 ssh_runner.go:195] Run: grep 192.168.61.130	control-plane.minikube.internal$ /etc/hosts
	I1105 18:54:08.528605   55563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:54:08.540904   55563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:54:08.665485   55563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:54:08.685635   55563 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991 for IP: 192.168.61.130
	I1105 18:54:08.685661   55563 certs.go:194] generating shared ca certs ...
	I1105 18:54:08.685683   55563 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:54:08.685850   55563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:54:08.685913   55563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:54:08.685927   55563 certs.go:256] generating profile certs ...
	I1105 18:54:08.686007   55563 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/client.key
	I1105 18:54:08.686029   55563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/client.crt with IP's: []
	I1105 18:54:08.805302   55563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/client.crt ...
	I1105 18:54:08.805332   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/client.crt: {Name:mk3d342be9da6da54138b05514cecb424c2f0ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:54:08.805530   55563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/client.key ...
	I1105 18:54:08.805554   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/client.key: {Name:mk234193c187c33af0191ac5b6ad2c43673302a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:54:08.805670   55563 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.key.30533d61
	I1105 18:54:08.805698   55563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.crt.30533d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.130]
	I1105 18:54:09.083130   55563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.crt.30533d61 ...
	I1105 18:54:09.083167   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.crt.30533d61: {Name:mkd5c625b1f18f010202eddc4391f684ec5a6190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:54:09.083356   55563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.key.30533d61 ...
	I1105 18:54:09.083374   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.key.30533d61: {Name:mk98c13573b00b684364f3e9c77e1d11d764f5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:54:09.083484   55563 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.crt.30533d61 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.crt
	I1105 18:54:09.083596   55563 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.key.30533d61 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.key
	I1105 18:54:09.083682   55563 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.key
	I1105 18:54:09.083711   55563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.crt with IP's: []
	I1105 18:54:09.554097   55563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.crt ...
	I1105 18:54:09.554135   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.crt: {Name:mkf628e9b6b066e44f3da06cea355729e440155b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:54:09.554333   55563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.key ...
	I1105 18:54:09.554359   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.key: {Name:mk1d5d1da11f73556e478af5bd0cf03e6f54cf6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:54:09.554607   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:54:09.554667   55563 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:54:09.554683   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:54:09.554714   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:54:09.554750   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:54:09.554773   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:54:09.554810   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:54:09.555551   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:54:09.584361   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:54:09.607526   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:54:09.634141   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:54:09.665400   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 18:54:09.692064   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:54:09.728429   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:54:09.758554   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 18:54:09.783554   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:54:09.808343   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:54:09.839723   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:54:09.863171   55563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:54:09.880080   55563 ssh_runner.go:195] Run: openssl version
	I1105 18:54:09.886207   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:54:09.898791   55563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:54:09.903345   55563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:54:09.903408   55563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:54:09.910274   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:54:09.921544   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:54:09.932366   55563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:54:09.937173   55563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:54:09.937227   55563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:54:09.942964   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:54:09.958488   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:54:09.971123   55563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:54:09.975767   55563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:54:09.975830   55563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:54:09.981537   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:54:09.992867   55563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:54:09.997206   55563 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:54:09.997282   55563 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:54:09.997392   55563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:54:09.997456   55563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:54:10.036515   55563 cri.go:89] found id: ""
	I1105 18:54:10.036575   55563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:54:10.046842   55563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:54:10.057163   55563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:54:10.067748   55563 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:54:10.067771   55563 kubeadm.go:157] found existing configuration files:
	
	I1105 18:54:10.067831   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:54:10.077236   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:54:10.077295   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:54:10.086868   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:54:10.096328   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:54:10.096408   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:54:10.106950   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:54:10.116747   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:54:10.116825   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:54:10.127422   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:54:10.138159   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:54:10.138232   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:54:10.148981   55563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:54:10.282257   55563 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 18:54:10.282333   55563 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:54:10.430811   55563 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:54:10.431018   55563 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:54:10.431164   55563 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 18:54:10.626232   55563 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:54:10.777099   55563 out.go:235]   - Generating certificates and keys ...
	I1105 18:54:10.777215   55563 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:54:10.777295   55563 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:54:10.983136   55563 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 18:54:11.092333   55563 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 18:54:11.310460   55563 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 18:54:11.446765   55563 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 18:54:11.519579   55563 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 18:54:11.519937   55563 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-906991 localhost] and IPs [192.168.61.130 127.0.0.1 ::1]
	I1105 18:54:11.952601   55563 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 18:54:11.952897   55563 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-906991 localhost] and IPs [192.168.61.130 127.0.0.1 ::1]
	I1105 18:54:12.141913   55563 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 18:54:12.322805   55563 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 18:54:12.670844   55563 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 18:54:12.671208   55563 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:54:12.800441   55563 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:54:12.911515   55563 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:54:13.116393   55563 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:54:13.305272   55563 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:54:13.331227   55563 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:54:13.332476   55563 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:54:13.332532   55563 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:54:13.452650   55563 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:54:13.454411   55563 out.go:235]   - Booting up control plane ...
	I1105 18:54:13.454539   55563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:54:13.459101   55563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:54:13.460042   55563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:54:13.468204   55563 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:54:13.473869   55563 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 18:54:53.467361   55563 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 18:54:53.468124   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:54:53.468398   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:54:58.468910   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:54:58.469207   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:55:08.468354   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:55:08.468545   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:55:28.468329   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:55:28.468660   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:56:08.470198   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:56:08.470459   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:56:08.470477   55563 kubeadm.go:310] 
	I1105 18:56:08.470544   55563 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 18:56:08.470599   55563 kubeadm.go:310] 		timed out waiting for the condition
	I1105 18:56:08.470608   55563 kubeadm.go:310] 
	I1105 18:56:08.470660   55563 kubeadm.go:310] 	This error is likely caused by:
	I1105 18:56:08.470700   55563 kubeadm.go:310] 		- The kubelet is not running
	I1105 18:56:08.470836   55563 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 18:56:08.470846   55563 kubeadm.go:310] 
	I1105 18:56:08.471021   55563 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 18:56:08.471068   55563 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 18:56:08.471116   55563 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 18:56:08.471122   55563 kubeadm.go:310] 
	I1105 18:56:08.471248   55563 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 18:56:08.471358   55563 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 18:56:08.471365   55563 kubeadm.go:310] 
	I1105 18:56:08.471489   55563 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 18:56:08.471595   55563 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 18:56:08.471686   55563 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 18:56:08.471772   55563 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 18:56:08.471781   55563 kubeadm.go:310] 
	I1105 18:56:08.472398   55563 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 18:56:08.472517   55563 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 18:56:08.472667   55563 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1105 18:56:08.472766   55563 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-906991 localhost] and IPs [192.168.61.130 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-906991 localhost] and IPs [192.168.61.130 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-906991 localhost] and IPs [192.168.61.130 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-906991 localhost] and IPs [192.168.61.130 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1105 18:56:08.472809   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 18:56:10.296820   55563 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.82397952s)
	I1105 18:56:10.296911   55563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:56:10.310143   55563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:56:10.319815   55563 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:56:10.319843   55563 kubeadm.go:157] found existing configuration files:
	
	I1105 18:56:10.319895   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:56:10.329241   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:56:10.329318   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:56:10.338942   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:56:10.348077   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:56:10.348143   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:56:10.357583   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:56:10.370070   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:56:10.370152   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:56:10.380106   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:56:10.389352   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:56:10.389417   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:56:10.401639   55563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:56:10.639786   55563 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 18:58:06.598895   55563 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 18:58:06.599041   55563 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 18:58:06.600823   55563 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 18:58:06.600903   55563 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:58:06.600998   55563 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:58:06.601136   55563 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:58:06.601274   55563 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 18:58:06.601355   55563 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:58:06.603095   55563 out.go:235]   - Generating certificates and keys ...
	I1105 18:58:06.603190   55563 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:58:06.603289   55563 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:58:06.603361   55563 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 18:58:06.603430   55563 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 18:58:06.603509   55563 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 18:58:06.603565   55563 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 18:58:06.603644   55563 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 18:58:06.603742   55563 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 18:58:06.603847   55563 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 18:58:06.603963   55563 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 18:58:06.604031   55563 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 18:58:06.604114   55563 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:58:06.604186   55563 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:58:06.604267   55563 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:58:06.604355   55563 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:58:06.604403   55563 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:58:06.604486   55563 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:58:06.604551   55563 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:58:06.604583   55563 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:58:06.604639   55563 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:58:06.605977   55563 out.go:235]   - Booting up control plane ...
	I1105 18:58:06.606063   55563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:58:06.606137   55563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:58:06.606224   55563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:58:06.606339   55563 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:58:06.606514   55563 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 18:58:06.606562   55563 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 18:58:06.606636   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:58:06.606800   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:58:06.606901   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:58:06.607128   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:58:06.607225   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:58:06.607392   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:58:06.607460   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:58:06.607625   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:58:06.607729   55563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 18:58:06.607908   55563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 18:58:06.607916   55563 kubeadm.go:310] 
	I1105 18:58:06.607961   55563 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 18:58:06.607995   55563 kubeadm.go:310] 		timed out waiting for the condition
	I1105 18:58:06.608001   55563 kubeadm.go:310] 
	I1105 18:58:06.608028   55563 kubeadm.go:310] 	This error is likely caused by:
	I1105 18:58:06.608060   55563 kubeadm.go:310] 		- The kubelet is not running
	I1105 18:58:06.608150   55563 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 18:58:06.608158   55563 kubeadm.go:310] 
	I1105 18:58:06.608247   55563 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 18:58:06.608280   55563 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 18:58:06.608315   55563 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 18:58:06.608321   55563 kubeadm.go:310] 
	I1105 18:58:06.608410   55563 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 18:58:06.608486   55563 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 18:58:06.608495   55563 kubeadm.go:310] 
	I1105 18:58:06.608584   55563 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 18:58:06.608661   55563 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 18:58:06.608728   55563 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 18:58:06.608791   55563 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 18:58:06.608847   55563 kubeadm.go:394] duration metric: took 3m56.611571944s to StartCluster
	I1105 18:58:06.608858   55563 kubeadm.go:310] 
	I1105 18:58:06.608883   55563 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 18:58:06.608934   55563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 18:58:06.649941   55563 cri.go:89] found id: ""
	I1105 18:58:06.649970   55563 logs.go:282] 0 containers: []
	W1105 18:58:06.649978   55563 logs.go:284] No container was found matching "kube-apiserver"
	I1105 18:58:06.649983   55563 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 18:58:06.650036   55563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 18:58:06.682054   55563 cri.go:89] found id: ""
	I1105 18:58:06.682083   55563 logs.go:282] 0 containers: []
	W1105 18:58:06.682094   55563 logs.go:284] No container was found matching "etcd"
	I1105 18:58:06.682102   55563 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 18:58:06.682162   55563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 18:58:06.714137   55563 cri.go:89] found id: ""
	I1105 18:58:06.714165   55563 logs.go:282] 0 containers: []
	W1105 18:58:06.714177   55563 logs.go:284] No container was found matching "coredns"
	I1105 18:58:06.714185   55563 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 18:58:06.714246   55563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 18:58:06.773410   55563 cri.go:89] found id: ""
	I1105 18:58:06.773439   55563 logs.go:282] 0 containers: []
	W1105 18:58:06.773450   55563 logs.go:284] No container was found matching "kube-scheduler"
	I1105 18:58:06.773457   55563 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 18:58:06.773526   55563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 18:58:06.805564   55563 cri.go:89] found id: ""
	I1105 18:58:06.805588   55563 logs.go:282] 0 containers: []
	W1105 18:58:06.805595   55563 logs.go:284] No container was found matching "kube-proxy"
	I1105 18:58:06.805601   55563 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 18:58:06.805654   55563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 18:58:06.839339   55563 cri.go:89] found id: ""
	I1105 18:58:06.839362   55563 logs.go:282] 0 containers: []
	W1105 18:58:06.839369   55563 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 18:58:06.839375   55563 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 18:58:06.839424   55563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 18:58:06.871657   55563 cri.go:89] found id: ""
	I1105 18:58:06.871688   55563 logs.go:282] 0 containers: []
	W1105 18:58:06.871699   55563 logs.go:284] No container was found matching "kindnet"
	I1105 18:58:06.871714   55563 logs.go:123] Gathering logs for container status ...
	I1105 18:58:06.871730   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 18:58:06.908906   55563 logs.go:123] Gathering logs for kubelet ...
	I1105 18:58:06.908939   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 18:58:06.965422   55563 logs.go:123] Gathering logs for dmesg ...
	I1105 18:58:06.965456   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 18:58:06.979234   55563 logs.go:123] Gathering logs for describe nodes ...
	I1105 18:58:06.979258   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 18:58:07.107617   55563 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 18:58:07.107636   55563 logs.go:123] Gathering logs for CRI-O ...
	I1105 18:58:07.107648   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1105 18:58:07.216605   55563 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 18:58:07.216679   55563 out.go:270] * 
	* 
	W1105 18:58:07.216749   55563 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 18:58:07.216771   55563 out.go:270] * 
	* 
	W1105 18:58:07.218085   55563 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 18:58:07.221744   55563 out.go:201] 
	W1105 18:58:07.223300   55563 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 18:58:07.223360   55563 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 18:58:07.223388   55563 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 18:58:07.226195   55563 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
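For manual triage of this failure, the commands below simply collect the suggestions already printed in the kubeadm and minikube output above; the profile name is taken from this run, and the final retry with the cgroup-driver flag is only a sketch of minikube's own suggestion, not something executed as part of this report:

	# Inspect kubelet health on the node (commands suggested by kubeadm above)
	minikube -p kubernetes-upgrade-906991 ssh "sudo systemctl status kubelet"
	minikube -p kubernetes-upgrade-906991 ssh "sudo journalctl -xeu kubelet"
	# List control-plane containers through CRI-O (also suggested by kubeadm above)
	minikube -p kubernetes-upgrade-906991 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Hypothetical retry using the cgroup-driver hint from the minikube suggestion above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd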
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-906991
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-906991: (6.333337224s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-906991 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-906991 status --format={{.Host}}: exit status 7 (69.412438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.839288586s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-906991 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (83.556397ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-906991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-906991
	    minikube start -p kubernetes-upgrade-906991 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9069912 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-906991 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1105 18:59:06.921835   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-906991 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.872634426s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-11-05 19:00:22.53903768 +0000 UTC m=+4756.211762365
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-906991 -n kubernetes-upgrade-906991
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-906991 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-906991 logs -n 25: (2.117432517s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-929548 sudo find                           | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | /etc/cni -type f -exec sh -c                         |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo ip a s                         | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	| ssh     | -p calico-929548 sudo ip r s                         | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | iptables-save                                        |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo iptables                       | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo cat                            | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo cat                            | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo cat                            | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo docker                         | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo cat                            | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo cat                            | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo cat                            | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo cat                            | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC | 05 Nov 24 19:00 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p calico-929548 sudo                                | calico-929548 | jenkins | v1.34.0 | 05 Nov 24 19:00 UTC |                     |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:58:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:58:50.707547   62635 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:58:50.707670   62635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:58:50.707678   62635 out.go:358] Setting ErrFile to fd 2...
	I1105 18:58:50.707684   62635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:58:50.707937   62635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:58:50.708563   62635 out.go:352] Setting JSON to false
	I1105 18:58:50.709639   62635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6073,"bootTime":1730827058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:58:50.709727   62635 start.go:139] virtualization: kvm guest
	I1105 18:58:50.711777   62635 out.go:177] * [kubernetes-upgrade-906991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:58:50.713188   62635 notify.go:220] Checking for updates...
	I1105 18:58:50.713224   62635 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:58:50.714532   62635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:58:50.716026   62635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:58:50.717303   62635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:58:50.718557   62635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:58:50.719637   62635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:58:50.721057   62635 config.go:182] Loaded profile config "kubernetes-upgrade-906991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:58:50.721449   62635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:50.721513   62635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:50.736574   62635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1105 18:58:50.737032   62635 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:50.737701   62635 main.go:141] libmachine: Using API Version  1
	I1105 18:58:50.737727   62635 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:50.738041   62635 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:50.738221   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:58:50.738454   62635 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:58:50.738776   62635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:50.738828   62635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:50.753596   62635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I1105 18:58:50.754106   62635 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:50.754639   62635 main.go:141] libmachine: Using API Version  1
	I1105 18:58:50.754660   62635 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:50.755020   62635 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:50.755203   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:58:50.794477   62635 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:58:50.795750   62635 start.go:297] selected driver: kvm2
	I1105 18:58:50.795770   62635 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:58:50.795895   62635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:58:50.796640   62635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:58:50.796754   62635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:58:50.812229   62635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:58:50.812672   62635 cni.go:84] Creating CNI manager for ""
	I1105 18:58:50.812738   62635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:58:50.812804   62635 start.go:340] cluster config:
	{Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-906991 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:58:50.812935   62635 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:58:50.814684   62635 out.go:177] * Starting "kubernetes-upgrade-906991" primary control-plane node in "kubernetes-upgrade-906991" cluster
	I1105 18:58:47.379216   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:47.379651   60910 main.go:141] libmachine: (calico-929548) DBG | unable to find current IP address of domain calico-929548 in network mk-calico-929548
	I1105 18:58:47.379675   60910 main.go:141] libmachine: (calico-929548) DBG | I1105 18:58:47.379600   62315 retry.go:31] will retry after 3.327484014s: waiting for machine to come up
	I1105 18:58:50.709001   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:50.709438   60910 main.go:141] libmachine: (calico-929548) DBG | unable to find current IP address of domain calico-929548 in network mk-calico-929548
	I1105 18:58:50.709459   60910 main.go:141] libmachine: (calico-929548) DBG | I1105 18:58:50.709401   62315 retry.go:31] will retry after 4.126308336s: waiting for machine to come up
	I1105 18:58:50.815839   62635 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:58:50.815888   62635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:58:50.815898   62635 cache.go:56] Caching tarball of preloaded images
	I1105 18:58:50.815983   62635 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:58:50.815994   62635 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:58:50.816074   62635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/config.json ...
	I1105 18:58:50.816244   62635 start.go:360] acquireMachinesLock for kubernetes-upgrade-906991: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:58:56.220163   60943 start.go:364] duration metric: took 34.289813735s to acquireMachinesLock for "custom-flannel-929548"
	I1105 18:58:56.220222   60943 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-929548 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:58:56.220333   60943 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 18:58:54.837675   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:54.838200   60910 main.go:141] libmachine: (calico-929548) Found IP for machine: 192.168.39.203
	I1105 18:58:54.838228   60910 main.go:141] libmachine: (calico-929548) Reserving static IP address...
	I1105 18:58:54.838243   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has current primary IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:54.838565   60910 main.go:141] libmachine: (calico-929548) DBG | unable to find host DHCP lease matching {name: "calico-929548", mac: "52:54:00:76:e9:b6", ip: "192.168.39.203"} in network mk-calico-929548
	I1105 18:58:54.914232   60910 main.go:141] libmachine: (calico-929548) DBG | Getting to WaitForSSH function...
	I1105 18:58:54.914268   60910 main.go:141] libmachine: (calico-929548) Reserved static IP address: 192.168.39.203
	I1105 18:58:54.914282   60910 main.go:141] libmachine: (calico-929548) Waiting for SSH to be available...
	I1105 18:58:54.916746   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:54.917153   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:54.917209   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:54.917362   60910 main.go:141] libmachine: (calico-929548) DBG | Using SSH client type: external
	I1105 18:58:54.917385   60910 main.go:141] libmachine: (calico-929548) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/calico-929548/id_rsa (-rw-------)
	I1105 18:58:54.917403   60910 main.go:141] libmachine: (calico-929548) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/calico-929548/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:58:54.917420   60910 main.go:141] libmachine: (calico-929548) DBG | About to run SSH command:
	I1105 18:58:54.917435   60910 main.go:141] libmachine: (calico-929548) DBG | exit 0
	I1105 18:58:55.038989   60910 main.go:141] libmachine: (calico-929548) DBG | SSH cmd err, output: <nil>: 
	I1105 18:58:55.039299   60910 main.go:141] libmachine: (calico-929548) KVM machine creation complete!
	I1105 18:58:55.039545   60910 main.go:141] libmachine: (calico-929548) Calling .GetConfigRaw
	I1105 18:58:55.040078   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:58:55.040298   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:58:55.040437   60910 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:58:55.040447   60910 main.go:141] libmachine: (calico-929548) Calling .GetState
	I1105 18:58:55.041775   60910 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:58:55.041789   60910 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:58:55.041794   60910 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:58:55.041799   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:55.044177   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.044503   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.044532   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.044638   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:55.044836   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.044952   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.045096   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:55.045256   60910 main.go:141] libmachine: Using SSH client type: native
	I1105 18:58:55.045437   60910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1105 18:58:55.045447   60910 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:58:55.146120   60910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:58:55.146143   60910 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:58:55.146151   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:55.148944   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.149341   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.149370   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.149487   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:55.149697   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.149889   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.150089   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:55.150271   60910 main.go:141] libmachine: Using SSH client type: native
	I1105 18:58:55.150455   60910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1105 18:58:55.150465   60910 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:58:55.255370   60910 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:58:55.255482   60910 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:58:55.255500   60910 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:58:55.255515   60910 main.go:141] libmachine: (calico-929548) Calling .GetMachineName
	I1105 18:58:55.255761   60910 buildroot.go:166] provisioning hostname "calico-929548"
	I1105 18:58:55.255788   60910 main.go:141] libmachine: (calico-929548) Calling .GetMachineName
	I1105 18:58:55.255966   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:55.258426   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.258750   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.258768   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.258996   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:55.259162   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.259284   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.259424   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:55.259562   60910 main.go:141] libmachine: Using SSH client type: native
	I1105 18:58:55.259772   60910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1105 18:58:55.259784   60910 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-929548 && echo "calico-929548" | sudo tee /etc/hostname
	I1105 18:58:55.376551   60910 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-929548
	
	I1105 18:58:55.376582   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:55.379468   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.379825   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.379852   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.380015   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:55.380194   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.380341   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.380493   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:55.380629   60910 main.go:141] libmachine: Using SSH client type: native
	I1105 18:58:55.380827   60910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1105 18:58:55.380850   60910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-929548' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-929548/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-929548' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:58:55.491139   60910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:58:55.491177   60910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:58:55.491203   60910 buildroot.go:174] setting up certificates
	I1105 18:58:55.491217   60910 provision.go:84] configureAuth start
	I1105 18:58:55.491227   60910 main.go:141] libmachine: (calico-929548) Calling .GetMachineName
	I1105 18:58:55.491488   60910 main.go:141] libmachine: (calico-929548) Calling .GetIP
	I1105 18:58:55.494110   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.494476   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.494505   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.494645   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:55.496933   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.497197   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.497230   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.497326   60910 provision.go:143] copyHostCerts
	I1105 18:58:55.497373   60910 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:58:55.497387   60910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:58:55.497456   60910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:58:55.497580   60910 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:58:55.497593   60910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:58:55.497623   60910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:58:55.497735   60910 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:58:55.497744   60910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:58:55.497765   60910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:58:55.497884   60910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.calico-929548 san=[127.0.0.1 192.168.39.203 calico-929548 localhost minikube]
	I1105 18:58:55.619020   60910 provision.go:177] copyRemoteCerts
	I1105 18:58:55.619072   60910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:58:55.619093   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:55.621592   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.622055   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.622083   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.622226   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:55.622397   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.622554   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:55.622669   60910 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/calico-929548/id_rsa Username:docker}
	I1105 18:58:55.700138   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:58:55.722435   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:58:55.747521   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:58:55.772138   60910 provision.go:87] duration metric: took 280.908247ms to configureAuth
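configureAuth above generates a server certificate signed by the local CA, with the org and SANs listed in the log (jenkins.calico-929548; 127.0.0.1, 192.168.39.203, calico-929548, localhost, minikube), then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A rough openssl equivalent of the generation step, assuming the ca.pem/ca-key.pem files named in the log (minikube itself does this in Go, not via openssl):

    # sketch only: reproduce a server cert with the same org and SANs
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.calico-929548"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.203,DNS:calico-929548,DNS:localhost,DNS:minikube') \
      -out server.pem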
	I1105 18:58:55.772170   60910 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:58:55.772398   60910 config.go:182] Loaded profile config "calico-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:58:55.772486   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:55.775322   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.775693   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.775720   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.775936   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:55.776123   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.776308   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.776467   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:55.776643   60910 main.go:141] libmachine: Using SSH client type: native
	I1105 18:58:55.776824   60910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1105 18:58:55.776839   60910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:58:55.988541   60910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:58:55.988569   60910 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:58:55.988577   60910 main.go:141] libmachine: (calico-929548) Calling .GetURL
	I1105 18:58:55.989916   60910 main.go:141] libmachine: (calico-929548) DBG | Using libvirt version 6000000
	I1105 18:58:55.992155   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.992523   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.992555   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.992767   60910 main.go:141] libmachine: Docker is up and running!
	I1105 18:58:55.992782   60910 main.go:141] libmachine: Reticulating splines...
	I1105 18:58:55.992788   60910 client.go:171] duration metric: took 21.744223357s to LocalClient.Create
	I1105 18:58:55.992810   60910 start.go:167] duration metric: took 21.7442896s to libmachine.API.Create "calico-929548"
	I1105 18:58:55.992821   60910 start.go:293] postStartSetup for "calico-929548" (driver="kvm2")
	I1105 18:58:55.992830   60910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:58:55.992845   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:58:55.993058   60910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:58:55.993082   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:55.995309   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.995612   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:55.995642   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:55.995760   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:55.995943   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:55.996101   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:55.996236   60910 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/calico-929548/id_rsa Username:docker}
	I1105 18:58:56.076536   60910 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:58:56.080460   60910 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:58:56.080485   60910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:58:56.080559   60910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:58:56.080649   60910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:58:56.080766   60910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:58:56.089338   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:58:56.111714   60910 start.go:296] duration metric: took 118.880808ms for postStartSetup
	I1105 18:58:56.111770   60910 main.go:141] libmachine: (calico-929548) Calling .GetConfigRaw
	I1105 18:58:56.112410   60910 main.go:141] libmachine: (calico-929548) Calling .GetIP
	I1105 18:58:56.115042   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.115394   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:56.115421   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.115662   60910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/config.json ...
	I1105 18:58:56.115879   60910 start.go:128] duration metric: took 21.887942573s to createHost
	I1105 18:58:56.115905   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:56.118180   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.118509   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:56.118537   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.118699   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:56.118868   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:56.119035   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:56.119170   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:56.119338   60910 main.go:141] libmachine: Using SSH client type: native
	I1105 18:58:56.119501   60910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1105 18:58:56.119511   60910 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:58:56.220020   60910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833136.186156315
	
	I1105 18:58:56.220052   60910 fix.go:216] guest clock: 1730833136.186156315
	I1105 18:58:56.220059   60910 fix.go:229] Guest: 2024-11-05 18:58:56.186156315 +0000 UTC Remote: 2024-11-05 18:58:56.115892289 +0000 UTC m=+34.847886017 (delta=70.264026ms)
	I1105 18:58:56.220077   60910 fix.go:200] guest clock delta is within tolerance: 70.264026ms
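The fix.go lines above compare the guest clock (date +%s.%N over SSH) against the host clock and only resync when the skew exceeds a tolerance. A standalone shell sketch of the same comparison; the ssh key path mirrors the one in the log and the 2-second threshold is illustrative, not necessarily minikube's value:

    # sketch: measure guest/host clock skew the same way the log does
    guest=$(ssh -i .minikube/machines/calico-929548/id_rsa docker@192.168.39.203 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN {
      d = h - g; if (d < 0) d = -d
      printf "delta=%.6fs -> %s\n", d, (d < 2.0 ? "within tolerance" : "needs resync")
    }'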
	I1105 18:58:56.220082   60910 start.go:83] releasing machines lock for "calico-929548", held for 21.992310269s
	I1105 18:58:56.220111   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:58:56.220391   60910 main.go:141] libmachine: (calico-929548) Calling .GetIP
	I1105 18:58:56.223458   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.223828   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:56.223864   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.224056   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:58:56.224563   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:58:56.224749   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:58:56.224864   60910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:58:56.224920   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:56.224944   60910 ssh_runner.go:195] Run: cat /version.json
	I1105 18:58:56.224968   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:58:56.227503   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.227768   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:56.227793   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.227821   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.227964   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:56.228120   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:56.228223   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:56.228259   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:56.228277   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:56.228329   60910 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/calico-929548/id_rsa Username:docker}
	I1105 18:58:56.228424   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:58:56.228556   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:58:56.228688   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:58:56.228868   60910 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/calico-929548/id_rsa Username:docker}
	I1105 18:58:56.222328   60943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1105 18:58:56.222493   60943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:56.222532   60943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:56.242667   60943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1105 18:58:56.243102   60943 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:56.243688   60943 main.go:141] libmachine: Using API Version  1
	I1105 18:58:56.243707   60943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:56.244118   60943 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:56.244344   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetMachineName
	I1105 18:58:56.244517   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:58:56.244722   60943 start.go:159] libmachine.API.Create for "custom-flannel-929548" (driver="kvm2")
	I1105 18:58:56.244758   60943 client.go:168] LocalClient.Create starting
	I1105 18:58:56.244798   60943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:58:56.244840   60943 main.go:141] libmachine: Decoding PEM data...
	I1105 18:58:56.244880   60943 main.go:141] libmachine: Parsing certificate...
	I1105 18:58:56.244975   60943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:58:56.245003   60943 main.go:141] libmachine: Decoding PEM data...
	I1105 18:58:56.245021   60943 main.go:141] libmachine: Parsing certificate...
	I1105 18:58:56.245044   60943 main.go:141] libmachine: Running pre-create checks...
	I1105 18:58:56.245061   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .PreCreateCheck
	I1105 18:58:56.245372   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetConfigRaw
	I1105 18:58:56.245808   60943 main.go:141] libmachine: Creating machine...
	I1105 18:58:56.245825   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .Create
	I1105 18:58:56.245936   60943 main.go:141] libmachine: (custom-flannel-929548) Creating KVM machine...
	I1105 18:58:56.247177   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found existing default KVM network
	I1105 18:58:56.248179   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:56.248033   62722 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:71:1d:0d} reservation:<nil>}
	I1105 18:58:56.249004   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:56.248909   62722 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000121ab0}
	I1105 18:58:56.249029   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | created network xml: 
	I1105 18:58:56.249041   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | <network>
	I1105 18:58:56.249049   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |   <name>mk-custom-flannel-929548</name>
	I1105 18:58:56.249059   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |   <dns enable='no'/>
	I1105 18:58:56.249065   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |   
	I1105 18:58:56.249078   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1105 18:58:56.249086   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |     <dhcp>
	I1105 18:58:56.249095   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1105 18:58:56.249107   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |     </dhcp>
	I1105 18:58:56.249119   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |   </ip>
	I1105 18:58:56.249129   60943 main.go:141] libmachine: (custom-flannel-929548) DBG |   
	I1105 18:58:56.249137   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | </network>
	I1105 18:58:56.249146   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | 
	I1105 18:58:56.254404   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | trying to create private KVM network mk-custom-flannel-929548 192.168.50.0/24...
	I1105 18:58:56.326559   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | private KVM network mk-custom-flannel-929548 192.168.50.0/24 created
	I1105 18:58:56.326594   60943 main.go:141] libmachine: (custom-flannel-929548) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548 ...
	I1105 18:58:56.326608   60943 main.go:141] libmachine: (custom-flannel-929548) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:58:56.326621   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:56.326531   62722 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:58:56.326678   60943 main.go:141] libmachine: (custom-flannel-929548) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:58:56.583734   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:56.583637   62722 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa...
	I1105 18:58:56.662106   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:56.661972   62722 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/custom-flannel-929548.rawdisk...
	I1105 18:58:56.662144   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Writing magic tar header
	I1105 18:58:56.662217   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Writing SSH key tar header
	I1105 18:58:56.662251   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:56.662087   62722 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548 ...
	I1105 18:58:56.662270   60943 main.go:141] libmachine: (custom-flannel-929548) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548 (perms=drwx------)
	I1105 18:58:56.662282   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548
	I1105 18:58:56.662289   60943 main.go:141] libmachine: (custom-flannel-929548) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:58:56.662303   60943 main.go:141] libmachine: (custom-flannel-929548) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:58:56.662311   60943 main.go:141] libmachine: (custom-flannel-929548) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:58:56.662321   60943 main.go:141] libmachine: (custom-flannel-929548) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:58:56.662330   60943 main.go:141] libmachine: (custom-flannel-929548) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:58:56.662344   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:58:56.662355   60943 main.go:141] libmachine: (custom-flannel-929548) Creating domain...
	I1105 18:58:56.662365   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:58:56.662379   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:58:56.662390   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:58:56.662398   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:58:56.662404   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Checking permissions on dir: /home
	I1105 18:58:56.662417   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Skipping /home - not owner
	I1105 18:58:56.663487   60943 main.go:141] libmachine: (custom-flannel-929548) define libvirt domain using xml: 
	I1105 18:58:56.663506   60943 main.go:141] libmachine: (custom-flannel-929548) <domain type='kvm'>
	I1105 18:58:56.663517   60943 main.go:141] libmachine: (custom-flannel-929548)   <name>custom-flannel-929548</name>
	I1105 18:58:56.663525   60943 main.go:141] libmachine: (custom-flannel-929548)   <memory unit='MiB'>3072</memory>
	I1105 18:58:56.663534   60943 main.go:141] libmachine: (custom-flannel-929548)   <vcpu>2</vcpu>
	I1105 18:58:56.663543   60943 main.go:141] libmachine: (custom-flannel-929548)   <features>
	I1105 18:58:56.663552   60943 main.go:141] libmachine: (custom-flannel-929548)     <acpi/>
	I1105 18:58:56.663562   60943 main.go:141] libmachine: (custom-flannel-929548)     <apic/>
	I1105 18:58:56.663580   60943 main.go:141] libmachine: (custom-flannel-929548)     <pae/>
	I1105 18:58:56.663595   60943 main.go:141] libmachine: (custom-flannel-929548)     
	I1105 18:58:56.663604   60943 main.go:141] libmachine: (custom-flannel-929548)   </features>
	I1105 18:58:56.663618   60943 main.go:141] libmachine: (custom-flannel-929548)   <cpu mode='host-passthrough'>
	I1105 18:58:56.663629   60943 main.go:141] libmachine: (custom-flannel-929548)   
	I1105 18:58:56.663636   60943 main.go:141] libmachine: (custom-flannel-929548)   </cpu>
	I1105 18:58:56.663646   60943 main.go:141] libmachine: (custom-flannel-929548)   <os>
	I1105 18:58:56.663657   60943 main.go:141] libmachine: (custom-flannel-929548)     <type>hvm</type>
	I1105 18:58:56.663678   60943 main.go:141] libmachine: (custom-flannel-929548)     <boot dev='cdrom'/>
	I1105 18:58:56.663689   60943 main.go:141] libmachine: (custom-flannel-929548)     <boot dev='hd'/>
	I1105 18:58:56.663713   60943 main.go:141] libmachine: (custom-flannel-929548)     <bootmenu enable='no'/>
	I1105 18:58:56.663735   60943 main.go:141] libmachine: (custom-flannel-929548)   </os>
	I1105 18:58:56.663746   60943 main.go:141] libmachine: (custom-flannel-929548)   <devices>
	I1105 18:58:56.663761   60943 main.go:141] libmachine: (custom-flannel-929548)     <disk type='file' device='cdrom'>
	I1105 18:58:56.663778   60943 main.go:141] libmachine: (custom-flannel-929548)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/boot2docker.iso'/>
	I1105 18:58:56.663791   60943 main.go:141] libmachine: (custom-flannel-929548)       <target dev='hdc' bus='scsi'/>
	I1105 18:58:56.663804   60943 main.go:141] libmachine: (custom-flannel-929548)       <readonly/>
	I1105 18:58:56.663829   60943 main.go:141] libmachine: (custom-flannel-929548)     </disk>
	I1105 18:58:56.663844   60943 main.go:141] libmachine: (custom-flannel-929548)     <disk type='file' device='disk'>
	I1105 18:58:56.663857   60943 main.go:141] libmachine: (custom-flannel-929548)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:58:56.663879   60943 main.go:141] libmachine: (custom-flannel-929548)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/custom-flannel-929548.rawdisk'/>
	I1105 18:58:56.663890   60943 main.go:141] libmachine: (custom-flannel-929548)       <target dev='hda' bus='virtio'/>
	I1105 18:58:56.663900   60943 main.go:141] libmachine: (custom-flannel-929548)     </disk>
	I1105 18:58:56.663912   60943 main.go:141] libmachine: (custom-flannel-929548)     <interface type='network'>
	I1105 18:58:56.663923   60943 main.go:141] libmachine: (custom-flannel-929548)       <source network='mk-custom-flannel-929548'/>
	I1105 18:58:56.663934   60943 main.go:141] libmachine: (custom-flannel-929548)       <model type='virtio'/>
	I1105 18:58:56.663945   60943 main.go:141] libmachine: (custom-flannel-929548)     </interface>
	I1105 18:58:56.663956   60943 main.go:141] libmachine: (custom-flannel-929548)     <interface type='network'>
	I1105 18:58:56.663967   60943 main.go:141] libmachine: (custom-flannel-929548)       <source network='default'/>
	I1105 18:58:56.663977   60943 main.go:141] libmachine: (custom-flannel-929548)       <model type='virtio'/>
	I1105 18:58:56.664006   60943 main.go:141] libmachine: (custom-flannel-929548)     </interface>
	I1105 18:58:56.664024   60943 main.go:141] libmachine: (custom-flannel-929548)     <serial type='pty'>
	I1105 18:58:56.664033   60943 main.go:141] libmachine: (custom-flannel-929548)       <target port='0'/>
	I1105 18:58:56.664046   60943 main.go:141] libmachine: (custom-flannel-929548)     </serial>
	I1105 18:58:56.664059   60943 main.go:141] libmachine: (custom-flannel-929548)     <console type='pty'>
	I1105 18:58:56.664070   60943 main.go:141] libmachine: (custom-flannel-929548)       <target type='serial' port='0'/>
	I1105 18:58:56.664082   60943 main.go:141] libmachine: (custom-flannel-929548)     </console>
	I1105 18:58:56.664092   60943 main.go:141] libmachine: (custom-flannel-929548)     <rng model='virtio'>
	I1105 18:58:56.664104   60943 main.go:141] libmachine: (custom-flannel-929548)       <backend model='random'>/dev/random</backend>
	I1105 18:58:56.664112   60943 main.go:141] libmachine: (custom-flannel-929548)     </rng>
	I1105 18:58:56.664119   60943 main.go:141] libmachine: (custom-flannel-929548)     
	I1105 18:58:56.664132   60943 main.go:141] libmachine: (custom-flannel-929548)     
	I1105 18:58:56.664144   60943 main.go:141] libmachine: (custom-flannel-929548)   </devices>
	I1105 18:58:56.664154   60943 main.go:141] libmachine: (custom-flannel-929548) </domain>
	I1105 18:58:56.664166   60943 main.go:141] libmachine: (custom-flannel-929548) 
	I1105 18:58:56.668085   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:36:c6:e0 in network default
	I1105 18:58:56.668572   60943 main.go:141] libmachine: (custom-flannel-929548) Ensuring networks are active...
	I1105 18:58:56.668598   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:58:56.669244   60943 main.go:141] libmachine: (custom-flannel-929548) Ensuring network default is active
	I1105 18:58:56.669550   60943 main.go:141] libmachine: (custom-flannel-929548) Ensuring network mk-custom-flannel-929548 is active
	I1105 18:58:56.670024   60943 main.go:141] libmachine: (custom-flannel-929548) Getting domain xml...
	I1105 18:58:56.670853   60943 main.go:141] libmachine: (custom-flannel-929548) Creating domain...
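The domain XML printed above is handed to libvirt to define and boot the VM. The same define/start cycle can be reproduced manually with virsh, assuming the XML were saved to a file (the file name below is illustrative):

    # manual equivalent of the define/start step done here by libmachine
    virsh net-list --all                       # mk-custom-flannel-929548 should be active
    virsh define custom-flannel-929548.xml     # register the domain from the XML above
    virsh start custom-flannel-929548          # boot it
    virsh domiflist custom-flannel-929548      # confirm both network interfaces are attached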
	I1105 18:58:56.307550   60910 ssh_runner.go:195] Run: systemctl --version
	I1105 18:58:56.339982   60910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:58:56.495868   60910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:58:56.502130   60910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:58:56.502201   60910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:58:56.518481   60910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:58:56.518503   60910 start.go:495] detecting cgroup driver to use...
	I1105 18:58:56.518568   60910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:58:56.535086   60910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:58:56.548076   60910 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:58:56.548154   60910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:58:56.561104   60910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:58:56.574013   60910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:58:56.687086   60910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:58:56.822532   60910 docker.go:233] disabling docker service ...
	I1105 18:58:56.822600   60910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:58:56.836469   60910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:58:56.848720   60910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:58:56.985804   60910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:58:57.103845   60910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:58:57.117348   60910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:58:57.134958   60910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:58:57.135089   60910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:58:57.146403   60910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:58:57.146459   60910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:58:57.159655   60910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:58:57.172508   60910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:58:57.182997   60910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:58:57.192933   60910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:58:57.202496   60910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:58:57.220661   60910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
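The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A quick verification sketch, not part of the test flow, to confirm the drop-in ended up as intended:

    # verification sketch: inspect the rewritten CRI-O drop-in on the guest
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf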
	I1105 18:58:57.232980   60910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:58:57.244004   60910 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:58:57.244075   60910 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:58:57.256297   60910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:58:57.265346   60910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:58:57.398553   60910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:58:57.500378   60910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:58:57.500462   60910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:58:57.504953   60910 start.go:563] Will wait 60s for crictl version
	I1105 18:58:57.505008   60910 ssh_runner.go:195] Run: which crictl
	I1105 18:58:57.508754   60910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:58:57.547142   60910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:58:57.547235   60910 ssh_runner.go:195] Run: crio --version
	I1105 18:58:57.575364   60910 ssh_runner.go:195] Run: crio --version
	I1105 18:58:57.603749   60910 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:58:57.604901   60910 main.go:141] libmachine: (calico-929548) Calling .GetIP
	I1105 18:58:57.607875   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:57.608279   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:58:57.608308   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:58:57.608497   60910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:58:57.612454   60910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:58:57.624835   60910 kubeadm.go:883] updating cluster {Name:calico-929548 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:58:57.624947   60910 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:58:57.625010   60910 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:58:57.658738   60910 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 18:58:57.658794   60910 ssh_runner.go:195] Run: which lz4
	I1105 18:58:57.662873   60910 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:58:57.666771   60910 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:58:57.666799   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 18:58:58.911356   60910 crio.go:462] duration metric: took 1.24851923s to copy over tarball
	I1105 18:58:58.911433   60910 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:59:01.224279   60910 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.31281571s)
	I1105 18:59:01.224309   60910 crio.go:469] duration metric: took 2.312927438s to extract the tarball
	I1105 18:59:01.224318   60910 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 18:59:01.260617   60910 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:59:01.300493   60910 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:59:01.300515   60910 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:59:01.300522   60910 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.2 crio true true} ...
	I1105 18:59:01.300631   60910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-929548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:calico-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1105 18:59:01.300712   60910 ssh_runner.go:195] Run: crio config
	I1105 18:58:57.999968   60943 main.go:141] libmachine: (custom-flannel-929548) Waiting to get IP...
	I1105 18:58:58.001051   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:58:58.001572   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:58:58.001600   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:58.001559   62722 retry.go:31] will retry after 254.633799ms: waiting for machine to come up
	I1105 18:58:58.258256   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:58:58.259111   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:58:58.259139   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:58.259013   62722 retry.go:31] will retry after 268.154017ms: waiting for machine to come up
	I1105 18:58:58.528608   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:58:58.529180   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:58:58.529209   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:58.529110   62722 retry.go:31] will retry after 355.810933ms: waiting for machine to come up
	I1105 18:58:58.886395   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:58:58.886858   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:58:58.886887   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:58.886835   62722 retry.go:31] will retry after 460.544647ms: waiting for machine to come up
	I1105 18:58:59.349202   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:58:59.349692   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:58:59.349722   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:58:59.349629   62722 retry.go:31] will retry after 681.62308ms: waiting for machine to come up
	I1105 18:59:00.033352   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:00.033842   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:00.033867   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:00.033816   62722 retry.go:31] will retry after 599.104559ms: waiting for machine to come up
	I1105 18:59:00.634568   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:00.635157   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:00.635183   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:00.635109   62722 retry.go:31] will retry after 1.089890604s: waiting for machine to come up
	I1105 18:59:01.726500   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:01.727057   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:01.727089   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:01.727009   62722 retry.go:31] will retry after 1.450554105s: waiting for machine to come up
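The retry loop above is waiting for the guest to pick up a DHCP lease on the freshly created mk-custom-flannel-929548 network. The lease state it is polling for can also be inspected directly through libvirt:

    # inspect the DHCP leases the retry loop is waiting on
    virsh net-dhcp-leases mk-custom-flannel-929548
    virsh domifaddr custom-flannel-929548 --source lease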
	I1105 18:59:01.345765   60910 cni.go:84] Creating CNI manager for "calico"
	I1105 18:59:01.345796   60910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:59:01.345821   60910 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-929548 NodeName:calico-929548 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:59:01.345943   60910 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-929548"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.203"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
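The config block above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the following lines scp to /var/tmp/minikube/kubeadm.yaml.new and later copy into place. A minimal Go sketch of walking those documents, assuming the gopkg.in/yaml.v3 package; this is an illustration, not part of minikube's code:

// checkkubeadm.go - sketch: decode the multi-document kubeadm YAML that the
// log above renders and print each document's kind/apiVersion.
// Assumes gopkg.in/yaml.v3 is available; the file path is taken from the log.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Expected kinds: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}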
	I1105 18:59:01.346008   60910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:59:01.358187   60910 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:59:01.358258   60910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 18:59:01.370847   60910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1105 18:59:01.391713   60910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:59:01.411231   60910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1105 18:59:01.430407   60910 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I1105 18:59:01.434816   60910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:59:01.449921   60910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:59:01.566425   60910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:59:01.584018   60910 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548 for IP: 192.168.39.203
	I1105 18:59:01.584049   60910 certs.go:194] generating shared ca certs ...
	I1105 18:59:01.584069   60910 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:01.584266   60910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:59:01.584328   60910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:59:01.584343   60910 certs.go:256] generating profile certs ...
	I1105 18:59:01.584423   60910 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.key
	I1105 18:59:01.584443   60910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt with IP's: []
	I1105 18:59:01.804969   60910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt ...
	I1105 18:59:01.804998   60910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: {Name:mk70b53bc312a81f6ac9ce82655318cab2b685d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:01.805168   60910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.key ...
	I1105 18:59:01.805180   60910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.key: {Name:mkdc6929b12d91af5ba27c8adf39633c33a63d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:01.805252   60910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.key.7c4f5eb9
	I1105 18:59:01.805269   60910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.crt.7c4f5eb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203]
	I1105 18:59:01.845926   60910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.crt.7c4f5eb9 ...
	I1105 18:59:01.845957   60910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.crt.7c4f5eb9: {Name:mk305f86a3231e49162828b5395ac386fcd72b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:01.846104   60910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.key.7c4f5eb9 ...
	I1105 18:59:01.846116   60910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.key.7c4f5eb9: {Name:mkfa631aa8eb905ff25c0d3a65488cb6e0c19b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:01.846183   60910 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.crt.7c4f5eb9 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.crt
	I1105 18:59:01.846269   60910 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.key.7c4f5eb9 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.key
	I1105 18:59:01.846327   60910 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/proxy-client.key
	I1105 18:59:01.846338   60910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/proxy-client.crt with IP's: []
	I1105 18:59:02.024682   60910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/proxy-client.crt ...
	I1105 18:59:02.024710   60910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/proxy-client.crt: {Name:mkc2a341cf9a4067bbc7ca0d667bdcd5e7c0082c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:02.024870   60910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/proxy-client.key ...
	I1105 18:59:02.024881   60910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/proxy-client.key: {Name:mk0a7293dfe68fe3660802e37087cfb5e32a1245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
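The crypto.go lines above issue the profile's client, apiserver, and aggregator certificates, each signed by the shared minikube CA; the apiserver cert carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.203 seen in the log. A rough standard-library sketch of issuing a cert of that shape; it builds a throwaway CA instead of reusing minikube's ca.key, so it is an illustration of the idea, not minikube's implementation:

// gencert.go - sketch: mint a serving certificate with the same IP SANs as
// the apiserver cert generated above, signed by a throwaway stand-in CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.203"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}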
	I1105 18:59:02.025043   60910 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:59:02.025076   60910 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:59:02.025086   60910 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:59:02.025106   60910 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:59:02.025148   60910 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:59:02.025175   60910 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:59:02.025224   60910 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:59:02.025770   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:59:02.050431   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:59:02.072459   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:59:02.096859   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:59:02.119952   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1105 18:59:02.142445   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:59:02.166029   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:59:02.189548   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:59:02.212342   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:59:02.238142   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:59:02.265739   60910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:59:02.294907   60910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:59:02.318744   60910 ssh_runner.go:195] Run: openssl version
	I1105 18:59:02.324506   60910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:59:02.335506   60910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:02.339839   60910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:02.339923   60910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:02.345561   60910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:59:02.356220   60910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:59:02.366742   60910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:59:02.370821   60910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:59:02.370892   60910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:59:02.376356   60910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:59:02.388389   60910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:59:02.400234   60910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:59:02.404790   60910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:59:02.404857   60910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:59:02.410509   60910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:59:02.424483   60910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:59:02.429304   60910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:59:02.429365   60910 kubeadm.go:392] StartCluster: {Name:calico-929548 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:59:02.429444   60910 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:59:02.429529   60910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:59:02.473278   60910 cri.go:89] found id: ""
	I1105 18:59:02.473369   60910 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:59:02.486247   60910 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:59:02.502388   60910 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:59:02.513087   60910 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:59:02.513110   60910 kubeadm.go:157] found existing configuration files:
	
	I1105 18:59:02.513163   60910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:59:02.523435   60910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:59:02.523506   60910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:59:02.533379   60910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:59:02.542463   60910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:59:02.542529   60910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:59:02.553037   60910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:59:02.561879   60910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:59:02.561949   60910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:59:02.571206   60910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:59:02.580111   60910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:59:02.580183   60910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:59:02.589607   60910 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:59:02.746393   60910 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 18:59:03.179585   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:03.179968   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:03.179995   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:03.179918   62722 retry.go:31] will retry after 1.162588704s: waiting for machine to come up
	I1105 18:59:04.344053   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:04.344426   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:04.344473   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:04.344407   62722 retry.go:31] will retry after 1.5026653s: waiting for machine to come up
	I1105 18:59:05.849559   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:05.850190   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:05.850226   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:05.850127   62722 retry.go:31] will retry after 1.796276299s: waiting for machine to come up
	I1105 18:59:07.648355   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:07.648775   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:07.648797   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:07.648729   62722 retry.go:31] will retry after 2.918809758s: waiting for machine to come up
	I1105 18:59:10.569755   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:10.570288   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:10.570315   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:10.570243   62722 retry.go:31] will retry after 4.048880434s: waiting for machine to come up
	I1105 18:59:12.500111   60910 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 18:59:12.500195   60910 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:59:12.500286   60910 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:59:12.500440   60910 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:59:12.500583   60910 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 18:59:12.500635   60910 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:59:12.502248   60910 out.go:235]   - Generating certificates and keys ...
	I1105 18:59:12.502334   60910 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:59:12.502413   60910 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:59:12.502502   60910 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 18:59:12.502588   60910 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 18:59:12.502653   60910 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 18:59:12.502710   60910 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 18:59:12.502777   60910 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 18:59:12.502877   60910 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-929548 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I1105 18:59:12.502924   60910 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 18:59:12.503047   60910 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-929548 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I1105 18:59:12.503105   60910 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 18:59:12.503159   60910 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 18:59:12.503195   60910 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 18:59:12.503245   60910 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:59:12.503297   60910 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:59:12.503364   60910 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 18:59:12.503426   60910 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:59:12.503502   60910 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:59:12.503580   60910 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:59:12.503680   60910 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:59:12.503776   60910 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:59:12.505009   60910 out.go:235]   - Booting up control plane ...
	I1105 18:59:12.505098   60910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:59:12.505173   60910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:59:12.505237   60910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:59:12.505328   60910 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:59:12.505423   60910 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:59:12.505461   60910 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:59:12.505564   60910 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 18:59:12.505650   60910 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 18:59:12.505715   60910 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001615578s
	I1105 18:59:12.505791   60910 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 18:59:12.505840   60910 kubeadm.go:310] [api-check] The API server is healthy after 5.001372273s
	I1105 18:59:12.505978   60910 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 18:59:12.506154   60910 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 18:59:12.506214   60910 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 18:59:12.506368   60910 kubeadm.go:310] [mark-control-plane] Marking the node calico-929548 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 18:59:12.506420   60910 kubeadm.go:310] [bootstrap-token] Using token: 8qu9z2.pa03534ttic3lh1o
	I1105 18:59:12.508563   60910 out.go:235]   - Configuring RBAC rules ...
	I1105 18:59:12.508652   60910 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 18:59:12.508721   60910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 18:59:12.508841   60910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 18:59:12.508943   60910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 18:59:12.509057   60910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 18:59:12.509159   60910 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 18:59:12.509305   60910 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 18:59:12.509365   60910 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 18:59:12.509430   60910 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 18:59:12.509439   60910 kubeadm.go:310] 
	I1105 18:59:12.509525   60910 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 18:59:12.509541   60910 kubeadm.go:310] 
	I1105 18:59:12.509618   60910 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 18:59:12.509626   60910 kubeadm.go:310] 
	I1105 18:59:12.509647   60910 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 18:59:12.509698   60910 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 18:59:12.509747   60910 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 18:59:12.509754   60910 kubeadm.go:310] 
	I1105 18:59:12.509798   60910 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 18:59:12.509804   60910 kubeadm.go:310] 
	I1105 18:59:12.509874   60910 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 18:59:12.509891   60910 kubeadm.go:310] 
	I1105 18:59:12.509966   60910 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 18:59:12.510061   60910 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 18:59:12.510151   60910 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 18:59:12.510159   60910 kubeadm.go:310] 
	I1105 18:59:12.510262   60910 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 18:59:12.510333   60910 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 18:59:12.510339   60910 kubeadm.go:310] 
	I1105 18:59:12.510410   60910 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8qu9z2.pa03534ttic3lh1o \
	I1105 18:59:12.510499   60910 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 18:59:12.510520   60910 kubeadm.go:310] 	--control-plane 
	I1105 18:59:12.510523   60910 kubeadm.go:310] 
	I1105 18:59:12.510593   60910 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 18:59:12.510599   60910 kubeadm.go:310] 
	I1105 18:59:12.510666   60910 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8qu9z2.pa03534ttic3lh1o \
	I1105 18:59:12.510770   60910 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
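The --discovery-token-ca-cert-hash values in the join commands above are the SHA-256 of the cluster CA certificate's Subject Public Key Info. A small standard-library sketch that recomputes the same "sha256:<hex>" string from a CA PEM; the path is taken from the log above and is illustrative:

// cahash.go - sketch: recompute kubeadm's discovery-token-ca-cert-hash from
// a CA certificate PEM by hashing its DER-encoded SubjectPublicKeyInfo.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}

Run against the ca.crt that this cluster uses, it should reproduce the 4ac191d0... digest printed in the join command.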
	I1105 18:59:12.510781   60910 cni.go:84] Creating CNI manager for "calico"
	I1105 18:59:12.512086   60910 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1105 18:59:12.513468   60910 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 18:59:12.513488   60910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (323065 bytes)
	I1105 18:59:12.535835   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1105 18:59:13.956361   60910 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.42048724s)
	I1105 18:59:13.956421   60910 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 18:59:13.956518   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:13.956528   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-929548 minikube.k8s.io/updated_at=2024_11_05T18_59_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=calico-929548 minikube.k8s.io/primary=true
	I1105 18:59:13.973607   60910 ops.go:34] apiserver oom_adj: -16
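The ops.go line above reports the API server's legacy oom_adj (-16, i.e. strongly disfavoured by the OOM killer), obtained via "cat /proc/$(pgrep kube-apiserver)/oom_adj". A small Go sketch of the same check that scans /proc for the process by name; illustrative only, not minikube's code:

// oomcheck.go - sketch: find kube-apiserver under /proc and print its legacy
// oom_adj value, mirroring the shell pipeline in the log above.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/proc/[0-9]*/comm")
	if err != nil {
		log.Fatal(err)
	}
	for _, comm := range matches {
		name, err := os.ReadFile(comm)
		if err != nil {
			continue // the process may have exited between Glob and ReadFile
		}
		if strings.TrimSpace(string(name)) != "kube-apiserver" {
			continue
		}
		adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("kube-apiserver oom_adj: %s", adj)
		return
	}
	log.Fatal("kube-apiserver not found")
}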
	I1105 18:59:14.082354   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:14.582927   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:15.083441   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:15.583297   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:16.083314   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:16.582893   60910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:16.663316   60910 kubeadm.go:1113] duration metric: took 2.706856724s to wait for elevateKubeSystemPrivileges
	I1105 18:59:16.663357   60910 kubeadm.go:394] duration metric: took 14.233996573s to StartCluster
	I1105 18:59:16.663380   60910 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:16.663467   60910 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:59:16.664448   60910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:16.664676   60910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 18:59:16.664675   60910 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:59:16.664698   60910 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 18:59:16.664786   60910 addons.go:69] Setting default-storageclass=true in profile "calico-929548"
	I1105 18:59:16.664820   60910 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-929548"
	I1105 18:59:16.664776   60910 addons.go:69] Setting storage-provisioner=true in profile "calico-929548"
	I1105 18:59:16.664913   60910 addons.go:234] Setting addon storage-provisioner=true in "calico-929548"
	I1105 18:59:16.664945   60910 config.go:182] Loaded profile config "calico-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:59:16.664958   60910 host.go:66] Checking if "calico-929548" exists ...
	I1105 18:59:16.665233   60910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:16.665279   60910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:16.665427   60910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:16.665471   60910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:16.666275   60910 out.go:177] * Verifying Kubernetes components...
	I1105 18:59:16.667405   60910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:59:16.680575   60910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I1105 18:59:16.680603   60910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I1105 18:59:16.681051   60910 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:16.681102   60910 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:16.681567   60910 main.go:141] libmachine: Using API Version  1
	I1105 18:59:16.681591   60910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:16.681706   60910 main.go:141] libmachine: Using API Version  1
	I1105 18:59:16.681728   60910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:16.681905   60910 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:16.682049   60910 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:16.682091   60910 main.go:141] libmachine: (calico-929548) Calling .GetState
	I1105 18:59:16.682630   60910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:16.682677   60910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:16.684957   60910 addons.go:234] Setting addon default-storageclass=true in "calico-929548"
	I1105 18:59:16.684991   60910 host.go:66] Checking if "calico-929548" exists ...
	I1105 18:59:16.685290   60910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:16.685328   60910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:16.698599   60910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42205
	I1105 18:59:16.699142   60910 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:16.699757   60910 main.go:141] libmachine: Using API Version  1
	I1105 18:59:16.699783   60910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:16.700122   60910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I1105 18:59:16.700301   60910 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:16.700510   60910 main.go:141] libmachine: (calico-929548) Calling .GetState
	I1105 18:59:16.700554   60910 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:16.700949   60910 main.go:141] libmachine: Using API Version  1
	I1105 18:59:16.700967   60910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:16.701304   60910 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:16.701774   60910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:16.701804   60910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:16.702575   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:59:16.704644   60910 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:59:14.620265   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:14.620675   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find current IP address of domain custom-flannel-929548 in network mk-custom-flannel-929548
	I1105 18:59:14.620706   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | I1105 18:59:14.620647   62722 retry.go:31] will retry after 3.434584329s: waiting for machine to come up
	I1105 18:59:16.706385   60910 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:59:16.706398   60910 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 18:59:16.706413   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:59:16.709670   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:59:16.710141   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:59:16.710155   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:59:16.710309   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:59:16.710458   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:59:16.710572   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:59:16.710664   60910 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/calico-929548/id_rsa Username:docker}
	I1105 18:59:16.718112   60910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36307
	I1105 18:59:16.718517   60910 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:16.718982   60910 main.go:141] libmachine: Using API Version  1
	I1105 18:59:16.719003   60910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:16.719317   60910 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:16.719504   60910 main.go:141] libmachine: (calico-929548) Calling .GetState
	I1105 18:59:16.720912   60910 main.go:141] libmachine: (calico-929548) Calling .DriverName
	I1105 18:59:16.721119   60910 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 18:59:16.721138   60910 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 18:59:16.721157   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHHostname
	I1105 18:59:16.724003   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:59:16.724440   60910 main.go:141] libmachine: (calico-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:e9:b6", ip: ""} in network mk-calico-929548: {Iface:virbr1 ExpiryTime:2024-11-05 19:58:48 +0000 UTC Type:0 Mac:52:54:00:76:e9:b6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:calico-929548 Clientid:01:52:54:00:76:e9:b6}
	I1105 18:59:16.724466   60910 main.go:141] libmachine: (calico-929548) DBG | domain calico-929548 has defined IP address 192.168.39.203 and MAC address 52:54:00:76:e9:b6 in network mk-calico-929548
	I1105 18:59:16.724767   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHPort
	I1105 18:59:16.724907   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHKeyPath
	I1105 18:59:16.725046   60910 main.go:141] libmachine: (calico-929548) Calling .GetSSHUsername
	I1105 18:59:16.725152   60910 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/calico-929548/id_rsa Username:docker}
	I1105 18:59:16.899293   60910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:59:16.899307   60910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 18:59:17.012079   60910 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 18:59:17.102436   60910 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:59:17.354403   60910 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1105 18:59:17.354570   60910 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:17.354598   60910 main.go:141] libmachine: (calico-929548) Calling .Close
	I1105 18:59:17.354923   60910 main.go:141] libmachine: (calico-929548) DBG | Closing plugin on server side
	I1105 18:59:17.354963   60910 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:17.355003   60910 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:17.355021   60910 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:17.355032   60910 main.go:141] libmachine: (calico-929548) Calling .Close
	I1105 18:59:17.355678   60910 node_ready.go:35] waiting up to 15m0s for node "calico-929548" to be "Ready" ...
	I1105 18:59:17.356255   60910 main.go:141] libmachine: (calico-929548) DBG | Closing plugin on server side
	I1105 18:59:17.356319   60910 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:17.356340   60910 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:17.400737   60910 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:17.400761   60910 main.go:141] libmachine: (calico-929548) Calling .Close
	I1105 18:59:17.401013   60910 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:17.401030   60910 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:17.764290   60910 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:17.764322   60910 main.go:141] libmachine: (calico-929548) Calling .Close
	I1105 18:59:17.764602   60910 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:17.764628   60910 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:17.764633   60910 main.go:141] libmachine: (calico-929548) DBG | Closing plugin on server side
	I1105 18:59:17.764643   60910 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:17.764653   60910 main.go:141] libmachine: (calico-929548) Calling .Close
	I1105 18:59:17.764889   60910 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:17.764922   60910 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:17.764930   60910 main.go:141] libmachine: (calico-929548) DBG | Closing plugin on server side
	I1105 18:59:17.767146   60910 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1105 18:59:19.559743   62292 start.go:364] duration metric: took 45.65812981s to acquireMachinesLock for "enable-default-cni-929548"
	I1105 18:59:19.559807   62292 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-929548 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:59:19.559925   62292 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 18:59:18.057010   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.057457   60943 main.go:141] libmachine: (custom-flannel-929548) Found IP for machine: 192.168.50.88
	I1105 18:59:18.057489   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has current primary IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.057514   60943 main.go:141] libmachine: (custom-flannel-929548) Reserving static IP address...
	I1105 18:59:18.057768   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | unable to find host DHCP lease matching {name: "custom-flannel-929548", mac: "52:54:00:4f:e9:82", ip: "192.168.50.88"} in network mk-custom-flannel-929548
	I1105 18:59:18.132810   60943 main.go:141] libmachine: (custom-flannel-929548) Reserved static IP address: 192.168.50.88
	I1105 18:59:18.132840   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Getting to WaitForSSH function...
	I1105 18:59:18.132849   60943 main.go:141] libmachine: (custom-flannel-929548) Waiting for SSH to be available...
	I1105 18:59:18.135978   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.136352   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:18.136375   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.136516   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Using SSH client type: external
	I1105 18:59:18.136536   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa (-rw-------)
	I1105 18:59:18.136571   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:59:18.136588   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | About to run SSH command:
	I1105 18:59:18.136620   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | exit 0
	I1105 18:59:18.270824   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | SSH cmd err, output: <nil>: 
	I1105 18:59:18.271101   60943 main.go:141] libmachine: (custom-flannel-929548) KVM machine creation complete!
	I1105 18:59:18.271377   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetConfigRaw
	I1105 18:59:18.271988   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:18.272144   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:18.272296   60943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:59:18.272308   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetState
	I1105 18:59:18.273648   60943 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:59:18.273661   60943 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:59:18.273666   60943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:59:18.273672   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:18.276227   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.276634   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:18.276673   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.276745   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:18.276898   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.277044   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.277152   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:18.277306   60943 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:18.277556   60943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.88 22 <nil> <nil>}
	I1105 18:59:18.277574   60943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:59:18.390050   60943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:59:18.390074   60943 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:59:18.390084   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:18.392976   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.393321   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:18.393357   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.393520   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:18.393696   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.393862   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.393962   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:18.394120   60943 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:18.394279   60943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.88 22 <nil> <nil>}
	I1105 18:59:18.394289   60943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:59:18.507456   60943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:59:18.507532   60943 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:59:18.507541   60943 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:59:18.507551   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetMachineName
	I1105 18:59:18.507794   60943 buildroot.go:166] provisioning hostname "custom-flannel-929548"
	I1105 18:59:18.507816   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetMachineName
	I1105 18:59:18.507997   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:18.510587   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.510945   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:18.510985   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.511150   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:18.511325   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.511458   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.511564   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:18.511712   60943 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:18.511877   60943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.88 22 <nil> <nil>}
	I1105 18:59:18.511888   60943 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-929548 && echo "custom-flannel-929548" | sudo tee /etc/hostname
	I1105 18:59:18.635996   60943 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-929548
	
	I1105 18:59:18.636026   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:18.638379   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.638729   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:18.638771   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.638998   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:18.639173   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.639320   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.639432   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:18.639602   60943 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:18.639822   60943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.88 22 <nil> <nil>}
	I1105 18:59:18.639840   60943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-929548' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-929548/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-929548' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:59:18.759554   60943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:59:18.759582   60943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:59:18.759610   60943 buildroot.go:174] setting up certificates
	I1105 18:59:18.759619   60943 provision.go:84] configureAuth start
	I1105 18:59:18.759628   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetMachineName
	I1105 18:59:18.759929   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetIP
	I1105 18:59:18.762693   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.763070   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:18.763099   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.763294   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:18.765579   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.765944   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:18.765965   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.766117   60943 provision.go:143] copyHostCerts
	I1105 18:59:18.766180   60943 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:59:18.766201   60943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:59:18.766273   60943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:59:18.766408   60943 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:59:18.766423   60943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:59:18.766453   60943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:59:18.766585   60943 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:59:18.766599   60943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:59:18.766628   60943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:59:18.766721   60943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-929548 san=[127.0.0.1 192.168.50.88 custom-flannel-929548 localhost minikube]
	I1105 18:59:18.897869   60943 provision.go:177] copyRemoteCerts
	I1105 18:59:18.897930   60943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:59:18.897952   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:18.900482   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.900790   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:18.900821   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:18.901036   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:18.901208   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:18.901336   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:18.901434   60943 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa Username:docker}
	I1105 18:59:18.990271   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:59:19.016470   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:59:19.038776   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 18:59:19.061279   60943 provision.go:87] duration metric: took 301.649359ms to configureAuth
	I1105 18:59:19.061303   60943 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:59:19.061449   60943 config.go:182] Loaded profile config "custom-flannel-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:59:19.061509   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:19.064200   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.064524   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:19.064553   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.064737   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:19.064918   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:19.065076   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:19.065205   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:19.065314   60943 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:19.065488   60943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.88 22 <nil> <nil>}
	I1105 18:59:19.065503   60943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:59:19.308624   60943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:59:19.308648   60943 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:59:19.308675   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetURL
	I1105 18:59:19.310152   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Using libvirt version 6000000
	I1105 18:59:19.312541   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.312930   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:19.312961   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.313091   60943 main.go:141] libmachine: Docker is up and running!
	I1105 18:59:19.313103   60943 main.go:141] libmachine: Reticulating splines...
	I1105 18:59:19.313110   60943 client.go:171] duration metric: took 23.068342282s to LocalClient.Create
	I1105 18:59:19.313129   60943 start.go:167] duration metric: took 23.068410371s to libmachine.API.Create "custom-flannel-929548"
	I1105 18:59:19.313135   60943 start.go:293] postStartSetup for "custom-flannel-929548" (driver="kvm2")
	I1105 18:59:19.313145   60943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:59:19.313161   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:19.313480   60943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:59:19.313504   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:19.315651   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.315953   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:19.315989   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.316080   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:19.316263   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:19.316404   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:19.316528   60943 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa Username:docker}
	I1105 18:59:19.401329   60943 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:59:19.405231   60943 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:59:19.405267   60943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:59:19.405345   60943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:59:19.405444   60943 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:59:19.405547   60943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:59:19.415819   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:59:19.438605   60943 start.go:296] duration metric: took 125.457135ms for postStartSetup
	I1105 18:59:19.438655   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetConfigRaw
	I1105 18:59:19.439234   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetIP
	I1105 18:59:19.441942   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.442339   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:19.442364   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.442617   60943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/config.json ...
	I1105 18:59:19.442795   60943 start.go:128] duration metric: took 23.222450624s to createHost
	I1105 18:59:19.442815   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:19.445129   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.445478   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:19.445503   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.445658   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:19.445847   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:19.445994   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:19.446104   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:19.446207   60943 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:19.446393   60943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.88 22 <nil> <nil>}
	I1105 18:59:19.446405   60943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:59:19.559590   60943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833159.533188614
	
	I1105 18:59:19.559612   60943 fix.go:216] guest clock: 1730833159.533188614
	I1105 18:59:19.559621   60943 fix.go:229] Guest: 2024-11-05 18:59:19.533188614 +0000 UTC Remote: 2024-11-05 18:59:19.442806243 +0000 UTC m=+57.623998707 (delta=90.382371ms)
	I1105 18:59:19.559644   60943 fix.go:200] guest clock delta is within tolerance: 90.382371ms
	I1105 18:59:19.559650   60943 start.go:83] releasing machines lock for "custom-flannel-929548", held for 23.339460076s
	I1105 18:59:19.559681   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:19.559934   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetIP
	I1105 18:59:19.562693   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.563111   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:19.563157   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.563345   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:19.563875   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:19.564036   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:19.564140   60943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:59:19.564181   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:19.564217   60943 ssh_runner.go:195] Run: cat /version.json
	I1105 18:59:19.564243   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:19.567130   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.567153   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.567566   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:19.567601   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.567632   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:19.567649   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:19.567778   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:19.567974   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:19.568026   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:19.568174   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:19.568184   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:19.568343   60943 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa Username:docker}
	I1105 18:59:19.568403   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:19.568534   60943 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa Username:docker}
	I1105 18:59:19.656283   60943 ssh_runner.go:195] Run: systemctl --version
	I1105 18:59:19.690368   60943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:59:19.850652   60943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:59:19.856674   60943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:59:19.856759   60943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:59:19.873098   60943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:59:19.873127   60943 start.go:495] detecting cgroup driver to use...
	I1105 18:59:19.873208   60943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:59:19.890445   60943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:59:19.905207   60943 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:59:19.905262   60943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:59:19.920582   60943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:59:19.935679   60943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:59:20.058607   60943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:59:20.242446   60943 docker.go:233] disabling docker service ...
	I1105 18:59:20.242518   60943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:59:20.261356   60943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:59:20.276483   60943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:59:20.416888   60943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:59:20.554290   60943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:59:20.569097   60943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:59:20.588116   60943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:59:20.588173   60943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:20.601577   60943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:59:20.601637   60943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:20.614708   60943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:20.626265   60943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:20.637438   60943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:59:20.649445   60943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:20.660653   60943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:20.679440   60943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:20.689410   60943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:59:20.699179   60943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:59:20.699246   60943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:59:20.712471   60943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:59:20.722539   60943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:59:20.897821   60943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:59:20.996415   60943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:59:20.996488   60943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:59:21.002270   60943 start.go:563] Will wait 60s for crictl version
	I1105 18:59:21.002341   60943 ssh_runner.go:195] Run: which crictl
	I1105 18:59:21.007097   60943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:59:21.053815   60943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:59:21.053911   60943 ssh_runner.go:195] Run: crio --version
	I1105 18:59:21.085550   60943 ssh_runner.go:195] Run: crio --version
	I1105 18:59:21.121104   60943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
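	The preceding commands configure the container runtime on the new guest: they write /etc/crictl.yaml to point crictl at unix:///var/run/crio/crio.sock, then edit /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, before restarting crio. A quick way to confirm the resulting state on the guest (a hedged sketch; the profile name is simply taken from this run) is:
	
		minikube ssh -p custom-flannel-929548 -- cat /etc/crictl.yaml
		minikube ssh -p custom-flannel-929548 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	
	If the edits applied cleanly, this should report the crio socket as runtime endpoint and the pause image, cgroup manager and sysctl values shown in the sed commands above.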
	I1105 18:59:17.768332   60910 addons.go:510] duration metric: took 1.103635124s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1105 18:59:17.862625   60910 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-929548" context rescaled to 1 replicas
	I1105 18:59:19.359063   60910 node_ready.go:53] node "calico-929548" has status "Ready":"False"
	I1105 18:59:21.123081   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetIP
	I1105 18:59:21.126807   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:21.127355   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:21.127385   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:21.127613   60943 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1105 18:59:21.132017   60943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:59:21.148425   60943 kubeadm.go:883] updating cluster {Name:custom-flannel-929548 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.88 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:59:21.148574   60943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:59:21.148652   60943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:59:21.194380   60943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 18:59:21.194471   60943 ssh_runner.go:195] Run: which lz4
	I1105 18:59:21.199225   60943 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:59:21.204272   60943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:59:21.204306   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
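	Because crictl found no preloaded images on the fresh guest, the ~392 MB preload tarball for v1.31.2/cri-o is copied to /preloaded.tar.lz4 and extracted before kubeadm runs. Assuming the same profile name as above, a minimal check that the preload actually unpacked is:
	
		minikube ssh -p custom-flannel-929548 -- sudo crictl images | grep kube-apiserver
	
	which should list registry.k8s.io/kube-apiserver:v1.31.2 once extraction completes.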
	I1105 18:59:19.562000   62292 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1105 18:59:19.562189   62292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:19.562238   62292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:19.578876   62292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1105 18:59:19.579455   62292 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:19.580071   62292 main.go:141] libmachine: Using API Version  1
	I1105 18:59:19.580091   62292 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:19.580417   62292 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:19.580597   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetMachineName
	I1105 18:59:19.580757   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 18:59:19.580967   62292 start.go:159] libmachine.API.Create for "enable-default-cni-929548" (driver="kvm2")
	I1105 18:59:19.581001   62292 client.go:168] LocalClient.Create starting
	I1105 18:59:19.581034   62292 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 18:59:19.581072   62292 main.go:141] libmachine: Decoding PEM data...
	I1105 18:59:19.581093   62292 main.go:141] libmachine: Parsing certificate...
	I1105 18:59:19.581156   62292 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 18:59:19.581188   62292 main.go:141] libmachine: Decoding PEM data...
	I1105 18:59:19.581208   62292 main.go:141] libmachine: Parsing certificate...
	I1105 18:59:19.581231   62292 main.go:141] libmachine: Running pre-create checks...
	I1105 18:59:19.581243   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .PreCreateCheck
	I1105 18:59:19.581681   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetConfigRaw
	I1105 18:59:19.582154   62292 main.go:141] libmachine: Creating machine...
	I1105 18:59:19.582172   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .Create
	I1105 18:59:19.582307   62292 main.go:141] libmachine: (enable-default-cni-929548) Creating KVM machine...
	I1105 18:59:19.583552   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found existing default KVM network
	I1105 18:59:19.584690   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:19.584545   62953 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:71:1d:0d} reservation:<nil>}
	I1105 18:59:19.585561   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:19.585473   62953 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:89:0c} reservation:<nil>}
	I1105 18:59:19.586302   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:19.586227   62953 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:14:df:f3} reservation:<nil>}
	I1105 18:59:19.587519   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:19.587426   62953 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003911c0}
	I1105 18:59:19.587549   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | created network xml: 
	I1105 18:59:19.587560   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | <network>
	I1105 18:59:19.587567   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |   <name>mk-enable-default-cni-929548</name>
	I1105 18:59:19.587583   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |   <dns enable='no'/>
	I1105 18:59:19.587592   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |   
	I1105 18:59:19.587612   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1105 18:59:19.587624   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |     <dhcp>
	I1105 18:59:19.587661   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1105 18:59:19.587691   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |     </dhcp>
	I1105 18:59:19.587704   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |   </ip>
	I1105 18:59:19.587714   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG |   
	I1105 18:59:19.587723   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | </network>
	I1105 18:59:19.587730   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | 
	I1105 18:59:19.593246   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | trying to create private KVM network mk-enable-default-cni-929548 192.168.72.0/24...
	I1105 18:59:19.670355   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | private KVM network mk-enable-default-cni-929548 192.168.72.0/24 created
	I1105 18:59:19.670386   62292 main.go:141] libmachine: (enable-default-cni-929548) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548 ...
	I1105 18:59:19.670414   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:19.670328   62953 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:59:19.670433   62292 main.go:141] libmachine: (enable-default-cni-929548) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 18:59:19.670505   62292 main.go:141] libmachine: (enable-default-cni-929548) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 18:59:19.928362   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:19.928214   62953 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa...
	I1105 18:59:20.200422   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:20.200300   62953 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/enable-default-cni-929548.rawdisk...
	I1105 18:59:20.200451   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Writing magic tar header
	I1105 18:59:20.200467   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Writing SSH key tar header
	I1105 18:59:20.200480   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:20.200409   62953 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548 ...
	I1105 18:59:20.200502   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548
	I1105 18:59:20.200578   62292 main.go:141] libmachine: (enable-default-cni-929548) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548 (perms=drwx------)
	I1105 18:59:20.200621   62292 main.go:141] libmachine: (enable-default-cni-929548) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 18:59:20.200637   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 18:59:20.200651   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:59:20.200664   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 18:59:20.200688   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 18:59:20.200702   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Checking permissions on dir: /home/jenkins
	I1105 18:59:20.200712   62292 main.go:141] libmachine: (enable-default-cni-929548) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 18:59:20.200725   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Checking permissions on dir: /home
	I1105 18:59:20.200737   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Skipping /home - not owner
	I1105 18:59:20.200754   62292 main.go:141] libmachine: (enable-default-cni-929548) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 18:59:20.200766   62292 main.go:141] libmachine: (enable-default-cni-929548) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 18:59:20.200777   62292 main.go:141] libmachine: (enable-default-cni-929548) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 18:59:20.200783   62292 main.go:141] libmachine: (enable-default-cni-929548) Creating domain...
	I1105 18:59:20.202000   62292 main.go:141] libmachine: (enable-default-cni-929548) define libvirt domain using xml: 
	I1105 18:59:20.202021   62292 main.go:141] libmachine: (enable-default-cni-929548) <domain type='kvm'>
	I1105 18:59:20.202033   62292 main.go:141] libmachine: (enable-default-cni-929548)   <name>enable-default-cni-929548</name>
	I1105 18:59:20.202042   62292 main.go:141] libmachine: (enable-default-cni-929548)   <memory unit='MiB'>3072</memory>
	I1105 18:59:20.202052   62292 main.go:141] libmachine: (enable-default-cni-929548)   <vcpu>2</vcpu>
	I1105 18:59:20.202059   62292 main.go:141] libmachine: (enable-default-cni-929548)   <features>
	I1105 18:59:20.202068   62292 main.go:141] libmachine: (enable-default-cni-929548)     <acpi/>
	I1105 18:59:20.202077   62292 main.go:141] libmachine: (enable-default-cni-929548)     <apic/>
	I1105 18:59:20.202098   62292 main.go:141] libmachine: (enable-default-cni-929548)     <pae/>
	I1105 18:59:20.202111   62292 main.go:141] libmachine: (enable-default-cni-929548)     
	I1105 18:59:20.202118   62292 main.go:141] libmachine: (enable-default-cni-929548)   </features>
	I1105 18:59:20.202125   62292 main.go:141] libmachine: (enable-default-cni-929548)   <cpu mode='host-passthrough'>
	I1105 18:59:20.202133   62292 main.go:141] libmachine: (enable-default-cni-929548)   
	I1105 18:59:20.202143   62292 main.go:141] libmachine: (enable-default-cni-929548)   </cpu>
	I1105 18:59:20.202151   62292 main.go:141] libmachine: (enable-default-cni-929548)   <os>
	I1105 18:59:20.202162   62292 main.go:141] libmachine: (enable-default-cni-929548)     <type>hvm</type>
	I1105 18:59:20.202174   62292 main.go:141] libmachine: (enable-default-cni-929548)     <boot dev='cdrom'/>
	I1105 18:59:20.202183   62292 main.go:141] libmachine: (enable-default-cni-929548)     <boot dev='hd'/>
	I1105 18:59:20.202198   62292 main.go:141] libmachine: (enable-default-cni-929548)     <bootmenu enable='no'/>
	I1105 18:59:20.202206   62292 main.go:141] libmachine: (enable-default-cni-929548)   </os>
	I1105 18:59:20.202214   62292 main.go:141] libmachine: (enable-default-cni-929548)   <devices>
	I1105 18:59:20.202225   62292 main.go:141] libmachine: (enable-default-cni-929548)     <disk type='file' device='cdrom'>
	I1105 18:59:20.202239   62292 main.go:141] libmachine: (enable-default-cni-929548)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/boot2docker.iso'/>
	I1105 18:59:20.202248   62292 main.go:141] libmachine: (enable-default-cni-929548)       <target dev='hdc' bus='scsi'/>
	I1105 18:59:20.202257   62292 main.go:141] libmachine: (enable-default-cni-929548)       <readonly/>
	I1105 18:59:20.202266   62292 main.go:141] libmachine: (enable-default-cni-929548)     </disk>
	I1105 18:59:20.202276   62292 main.go:141] libmachine: (enable-default-cni-929548)     <disk type='file' device='disk'>
	I1105 18:59:20.202288   62292 main.go:141] libmachine: (enable-default-cni-929548)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 18:59:20.202304   62292 main.go:141] libmachine: (enable-default-cni-929548)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/enable-default-cni-929548.rawdisk'/>
	I1105 18:59:20.202315   62292 main.go:141] libmachine: (enable-default-cni-929548)       <target dev='hda' bus='virtio'/>
	I1105 18:59:20.202326   62292 main.go:141] libmachine: (enable-default-cni-929548)     </disk>
	I1105 18:59:20.202336   62292 main.go:141] libmachine: (enable-default-cni-929548)     <interface type='network'>
	I1105 18:59:20.202349   62292 main.go:141] libmachine: (enable-default-cni-929548)       <source network='mk-enable-default-cni-929548'/>
	I1105 18:59:20.202360   62292 main.go:141] libmachine: (enable-default-cni-929548)       <model type='virtio'/>
	I1105 18:59:20.202368   62292 main.go:141] libmachine: (enable-default-cni-929548)     </interface>
	I1105 18:59:20.202378   62292 main.go:141] libmachine: (enable-default-cni-929548)     <interface type='network'>
	I1105 18:59:20.202387   62292 main.go:141] libmachine: (enable-default-cni-929548)       <source network='default'/>
	I1105 18:59:20.202398   62292 main.go:141] libmachine: (enable-default-cni-929548)       <model type='virtio'/>
	I1105 18:59:20.202410   62292 main.go:141] libmachine: (enable-default-cni-929548)     </interface>
	I1105 18:59:20.202418   62292 main.go:141] libmachine: (enable-default-cni-929548)     <serial type='pty'>
	I1105 18:59:20.202428   62292 main.go:141] libmachine: (enable-default-cni-929548)       <target port='0'/>
	I1105 18:59:20.202444   62292 main.go:141] libmachine: (enable-default-cni-929548)     </serial>
	I1105 18:59:20.202457   62292 main.go:141] libmachine: (enable-default-cni-929548)     <console type='pty'>
	I1105 18:59:20.202468   62292 main.go:141] libmachine: (enable-default-cni-929548)       <target type='serial' port='0'/>
	I1105 18:59:20.202475   62292 main.go:141] libmachine: (enable-default-cni-929548)     </console>
	I1105 18:59:20.202485   62292 main.go:141] libmachine: (enable-default-cni-929548)     <rng model='virtio'>
	I1105 18:59:20.202496   62292 main.go:141] libmachine: (enable-default-cni-929548)       <backend model='random'>/dev/random</backend>
	I1105 18:59:20.202505   62292 main.go:141] libmachine: (enable-default-cni-929548)     </rng>
	I1105 18:59:20.202513   62292 main.go:141] libmachine: (enable-default-cni-929548)     
	I1105 18:59:20.202525   62292 main.go:141] libmachine: (enable-default-cni-929548)     
	I1105 18:59:20.202534   62292 main.go:141] libmachine: (enable-default-cni-929548)   </devices>
	I1105 18:59:20.202543   62292 main.go:141] libmachine: (enable-default-cni-929548) </domain>
	I1105 18:59:20.202554   62292 main.go:141] libmachine: (enable-default-cni-929548) 
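	The XML above is the libvirt domain libmachine defines for enable-default-cni-929548: the boot2docker ISO attached as a cdrom, the raw disk as a virtio device, and two virtio NICs, one on the private mk-enable-default-cni-929548 network (192.168.72.0/24) and one on the default network. To cross-check the definition from the host (a sketch, assuming virsh is pointed at the same qemu:///system URI recorded in the cluster config):
	
		virsh -c qemu:///system dumpxml enable-default-cni-929548 | head -n 20
		virsh -c qemu:///system net-info mk-enable-default-cni-929548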
	I1105 18:59:20.209454   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:fb:ab:e4 in network default
	I1105 18:59:20.210030   62292 main.go:141] libmachine: (enable-default-cni-929548) Ensuring networks are active...
	I1105 18:59:20.210053   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:20.210922   62292 main.go:141] libmachine: (enable-default-cni-929548) Ensuring network default is active
	I1105 18:59:20.211284   62292 main.go:141] libmachine: (enable-default-cni-929548) Ensuring network mk-enable-default-cni-929548 is active
	I1105 18:59:20.211938   62292 main.go:141] libmachine: (enable-default-cni-929548) Getting domain xml...
	I1105 18:59:20.212737   62292 main.go:141] libmachine: (enable-default-cni-929548) Creating domain...
	I1105 18:59:21.705189   62292 main.go:141] libmachine: (enable-default-cni-929548) Waiting to get IP...
	I1105 18:59:21.706615   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:21.707297   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:21.707332   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:21.707273   62953 retry.go:31] will retry after 256.021001ms: waiting for machine to come up
	I1105 18:59:21.964967   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:21.965594   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:21.965617   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:21.965510   62953 retry.go:31] will retry after 240.330693ms: waiting for machine to come up
	I1105 18:59:22.208000   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:22.208674   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:22.208695   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:22.208577   62953 retry.go:31] will retry after 377.897384ms: waiting for machine to come up
	I1105 18:59:22.588052   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:22.588663   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:22.588689   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:22.588612   62953 retry.go:31] will retry after 424.190044ms: waiting for machine to come up
	I1105 18:59:23.014172   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:23.014824   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:23.014847   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:23.014776   62953 retry.go:31] will retry after 639.362615ms: waiting for machine to come up
	I1105 18:59:23.655582   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:23.656220   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:23.656260   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:23.656171   62953 retry.go:31] will retry after 800.393851ms: waiting for machine to come up
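The repeated "will retry after ..." lines come from minikube's retry helper polling for the new domain's DHCP lease until an IP address appears, waiting a little longer (with jitter) on each attempt. A minimal sketch of that pattern; the attempt count and base interval are chosen only for illustration and are not taken from retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a growing, jittered interval between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 300*time.Millisecond, func() error {
		tries++
		if tries < 4 { // pretend the lease shows up on the fourth poll
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}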
	I1105 18:59:21.359995   60910 node_ready.go:53] node "calico-929548" has status "Ready":"False"
	I1105 18:59:23.362840   60910 node_ready.go:53] node "calico-929548" has status "Ready":"False"
	I1105 18:59:25.860103   60910 node_ready.go:53] node "calico-929548" has status "Ready":"False"
	I1105 18:59:22.565801   60943 crio.go:462] duration metric: took 1.366611901s to copy over tarball
	I1105 18:59:22.565876   60943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:59:25.069240   60943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.503333556s)
	I1105 18:59:25.069270   60943 crio.go:469] duration metric: took 2.503439734s to extract the tarball
	I1105 18:59:25.069280   60943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 18:59:25.107795   60943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:59:25.151154   60943 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:59:25.151179   60943 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:59:25.151188   60943 kubeadm.go:934] updating node { 192.168.50.88 8443 v1.31.2 crio true true} ...
	I1105 18:59:25.151311   60943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-929548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
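Note on the [Service] drop-in above: the empty ExecStart= assignment is deliberate. In a systemd override file, setting ExecStart= to nothing clears the ExecStart inherited from the base kubelet.service, so the full kubelet command on the following line becomes the only one systemd runs.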
	I1105 18:59:25.151421   60943 ssh_runner.go:195] Run: crio config
	I1105 18:59:25.210706   60943 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1105 18:59:25.210753   60943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:59:25.210780   60943 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.88 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-929548 NodeName:custom-flannel-929548 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:59:25.210963   60943 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-929548"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.88"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.88"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 18:59:25.211047   60943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:59:25.220619   60943 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:59:25.220710   60943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 18:59:25.231255   60943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1105 18:59:25.253083   60943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:59:25.270190   60943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
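The 2298-byte kubeadm.yaml.new copied here is the multi-document config printed above, which minikube renders from the kubeadm options struct (kubeadm.go:189) using Go templates. A greatly reduced, hypothetical sketch of that render step; the template text and field names are invented for illustration and are not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Options is a tiny stand-in for the kubeadm options struct in the log.
type Options struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the log above; the template itself is illustrative.
	opts := Options{
		AdvertiseAddress: "192.168.50.88",
		APIServerPort:    8443,
		NodeName:         "custom-flannel-929548",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.31.2",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}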
	I1105 18:59:25.287659   60943 ssh_runner.go:195] Run: grep 192.168.50.88	control-plane.minikube.internal$ /etc/hosts
	I1105 18:59:25.291525   60943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
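The bash one-liner just above keeps /etc/hosts idempotent: it filters out any existing line ending in a tab plus control-plane.minikube.internal, appends the fresh IP mapping, writes the result to a temp file, and copies it back with sudo. The same edit expressed as a small Go sketch (the file path is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the hostname, then appends the
// desired IP mapping, mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Hypothetical local copy of the hosts file.
	if err := ensureHostsEntry("hosts.txt", "192.168.50.88", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}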
	I1105 18:59:25.303209   60943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:59:25.426075   60943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:59:25.442492   60943 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548 for IP: 192.168.50.88
	I1105 18:59:25.442515   60943 certs.go:194] generating shared ca certs ...
	I1105 18:59:25.442535   60943 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:25.442734   60943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:59:25.442803   60943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:59:25.442817   60943 certs.go:256] generating profile certs ...
	I1105 18:59:25.442883   60943 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.key
	I1105 18:59:25.442900   60943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt with IP's: []
	I1105 18:59:25.701647   60943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt ...
	I1105 18:59:25.701678   60943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: {Name:mk12c8d75120612da496ef0144870515ad92c38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:25.701836   60943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.key ...
	I1105 18:59:25.701849   60943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.key: {Name:mk66a031b5f2bdb704683c8bec6c84c789f6f77c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:25.701928   60943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.key.f0e26a26
	I1105 18:59:25.701950   60943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.crt.f0e26a26 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.88]
	I1105 18:59:25.876628   60943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.crt.f0e26a26 ...
	I1105 18:59:25.876655   60943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.crt.f0e26a26: {Name:mkcad5ac37445dfd5245ed88f3360a6360f44814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:25.876821   60943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.key.f0e26a26 ...
	I1105 18:59:25.876833   60943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.key.f0e26a26: {Name:mke6b2095a4cc503654b154403825fec783443d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:25.876909   60943 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.crt.f0e26a26 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.crt
	I1105 18:59:25.877003   60943 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.key.f0e26a26 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.key
	I1105 18:59:25.877063   60943 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/proxy-client.key
	I1105 18:59:25.877076   60943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/proxy-client.crt with IP's: []
	I1105 18:59:26.002906   60943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/proxy-client.crt ...
	I1105 18:59:26.002936   60943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/proxy-client.crt: {Name:mk47846a8c725fdf9c8757b2d132835b5264e928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:26.003117   60943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/proxy-client.key ...
	I1105 18:59:26.003133   60943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/proxy-client.key: {Name:mk5f0ac7e8affa1e46fb1e21f70afa030d560eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
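All three profile certs generated above (client, apiserver, and the proxy-client "aggregator" cert) are signed by the shared minikube CAs, with the apiserver cert carrying the IP SANs listed at crypto.go:68 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.88). A self-contained sketch of issuing such a serving cert with Go's crypto/x509; key sizes, names, and lifetimes here are illustrative and differ from minikube's own crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA (minikube reuses an existing minikubeCA key pair instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SANs the log shows.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.88"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}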
	I1105 18:59:26.003305   60943 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:59:26.003342   60943 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:59:26.003349   60943 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:59:26.003370   60943 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:59:26.003394   60943 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:59:26.003418   60943 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:59:26.003464   60943 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:59:26.004012   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:59:26.033860   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:59:26.059505   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:59:26.083873   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:59:26.115200   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 18:59:26.145571   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:59:26.170648   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:59:26.195746   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:59:26.219443   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:59:26.247801   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:59:26.275439   60943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:59:26.301734   60943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:59:26.319397   60943 ssh_runner.go:195] Run: openssl version
	I1105 18:59:26.325547   60943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:59:26.336472   60943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:59:26.341264   60943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:59:26.341330   60943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:59:26.347026   60943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:59:26.357104   60943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:59:26.366946   60943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:26.371610   60943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:26.371656   60943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:26.377473   60943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:59:26.388845   60943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:59:26.400014   60943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:59:26.404619   60943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:59:26.404682   60943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:59:26.410302   60943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
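The ln -fs targets such as /etc/ssl/certs/b5213941.0 are OpenSSL subject-hash names: "openssl x509 -hash -noout" prints the hash of the certificate subject, and a symlink named <hash>.0 lets the system trust store find the CA. A small sketch of the same install step (paths hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a PEM cert and
// symlinks <hash>.0 to it inside a certs directory, like the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}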
	I1105 18:59:26.420790   60943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:59:26.425998   60943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:59:26.426073   60943 kubeadm.go:392] StartCluster: {Name:custom-flannel-929548 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.88 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:59:26.426167   60943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:59:26.426246   60943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:59:26.465121   60943 cri.go:89] found id: ""
	I1105 18:59:26.465212   60943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:59:26.479057   60943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:59:26.490120   60943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:59:26.502412   60943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:59:26.502439   60943 kubeadm.go:157] found existing configuration files:
	
	I1105 18:59:26.502533   60943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:59:26.515501   60943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:59:26.515568   60943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:59:26.529864   60943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:59:26.542963   60943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:59:26.543108   60943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:59:26.556703   60943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:59:26.566057   60943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:59:26.566133   60943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:59:26.578550   60943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:59:26.587547   60943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:59:26.587631   60943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:59:26.600293   60943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:59:26.779051   60943 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 18:59:24.458506   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:24.459190   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:24.459215   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:24.459151   62953 retry.go:31] will retry after 937.774318ms: waiting for machine to come up
	I1105 18:59:25.398160   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:25.398676   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:25.398710   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:25.398620   62953 retry.go:31] will retry after 1.089594157s: waiting for machine to come up
	I1105 18:59:26.489787   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:26.490375   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:26.490406   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:26.490327   62953 retry.go:31] will retry after 1.688222556s: waiting for machine to come up
	I1105 18:59:28.180486   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:28.180981   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:28.181005   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:28.180931   62953 retry.go:31] will retry after 1.798880803s: waiting for machine to come up
	I1105 18:59:27.867934   60910 node_ready.go:53] node "calico-929548" has status "Ready":"False"
	I1105 18:59:28.669570   60910 node_ready.go:49] node "calico-929548" has status "Ready":"True"
	I1105 18:59:28.669597   60910 node_ready.go:38] duration metric: took 11.313885148s for node "calico-929548" to be "Ready" ...
	I1105 18:59:28.669609   60910 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:59:28.683984   60910 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:30.691770   60910 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:29.981918   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:29.982519   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:29.982551   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:29.982466   62953 retry.go:31] will retry after 2.716507028s: waiting for machine to come up
	I1105 18:59:32.701771   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:32.702378   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:32.702413   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:32.702332   62953 retry.go:31] will retry after 2.973717798s: waiting for machine to come up
	I1105 18:59:33.189521   60910 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:35.190255   60910 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:37.230810   60943 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 18:59:37.230899   60943 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:59:37.231060   60943 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:59:37.231185   60943 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:59:37.231353   60943 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 18:59:37.231450   60943 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:59:37.233227   60943 out.go:235]   - Generating certificates and keys ...
	I1105 18:59:37.233327   60943 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:59:37.233444   60943 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:59:37.233536   60943 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 18:59:37.233635   60943 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 18:59:37.233726   60943 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 18:59:37.233829   60943 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 18:59:37.233898   60943 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 18:59:37.234031   60943 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-929548 localhost] and IPs [192.168.50.88 127.0.0.1 ::1]
	I1105 18:59:37.234106   60943 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 18:59:37.234316   60943 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-929548 localhost] and IPs [192.168.50.88 127.0.0.1 ::1]
	I1105 18:59:37.234417   60943 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 18:59:37.234512   60943 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 18:59:37.234587   60943 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 18:59:37.234672   60943 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:59:37.234761   60943 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:59:37.234854   60943 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 18:59:37.234922   60943 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:59:37.235035   60943 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:59:37.235138   60943 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:59:37.235276   60943 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:59:37.235387   60943 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:59:37.236901   60943 out.go:235]   - Booting up control plane ...
	I1105 18:59:37.237064   60943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:59:37.237212   60943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:59:37.237335   60943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:59:37.237520   60943 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:59:37.237647   60943 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:59:37.237705   60943 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:59:37.237859   60943 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 18:59:37.237981   60943 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 18:59:37.238055   60943 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.032083ms
	I1105 18:59:37.238151   60943 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 18:59:37.238244   60943 kubeadm.go:310] [api-check] The API server is healthy after 5.501218273s
	I1105 18:59:37.238375   60943 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 18:59:37.238529   60943 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 18:59:37.238608   60943 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 18:59:37.238826   60943 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-929548 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 18:59:37.238898   60943 kubeadm.go:310] [bootstrap-token] Using token: 69lzve.u0r8wxnz94sdmjvy
	I1105 18:59:37.240508   60943 out.go:235]   - Configuring RBAC rules ...
	I1105 18:59:37.240654   60943 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 18:59:37.240763   60943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 18:59:37.240909   60943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 18:59:37.241018   60943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 18:59:37.241111   60943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 18:59:37.241214   60943 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 18:59:37.241376   60943 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 18:59:37.241442   60943 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 18:59:37.241553   60943 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 18:59:37.241563   60943 kubeadm.go:310] 
	I1105 18:59:37.241644   60943 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 18:59:37.241654   60943 kubeadm.go:310] 
	I1105 18:59:37.241788   60943 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 18:59:37.241800   60943 kubeadm.go:310] 
	I1105 18:59:37.241831   60943 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 18:59:37.241913   60943 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 18:59:37.241980   60943 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 18:59:37.241991   60943 kubeadm.go:310] 
	I1105 18:59:37.242109   60943 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 18:59:37.242129   60943 kubeadm.go:310] 
	I1105 18:59:37.242198   60943 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 18:59:37.242212   60943 kubeadm.go:310] 
	I1105 18:59:37.242286   60943 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 18:59:37.242421   60943 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 18:59:37.242516   60943 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 18:59:37.242525   60943 kubeadm.go:310] 
	I1105 18:59:37.242636   60943 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 18:59:37.242739   60943 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 18:59:37.242751   60943 kubeadm.go:310] 
	I1105 18:59:37.242861   60943 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 69lzve.u0r8wxnz94sdmjvy \
	I1105 18:59:37.243009   60943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 18:59:37.243040   60943 kubeadm.go:310] 	--control-plane 
	I1105 18:59:37.243054   60943 kubeadm.go:310] 
	I1105 18:59:37.243141   60943 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 18:59:37.243154   60943 kubeadm.go:310] 
	I1105 18:59:37.243238   60943 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 69lzve.u0r8wxnz94sdmjvy \
	I1105 18:59:37.243376   60943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 18:59:37.243390   60943 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1105 18:59:37.245686   60943 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1105 18:59:35.677339   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:35.677719   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:35.677758   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:35.677678   62953 retry.go:31] will retry after 4.398106552s: waiting for machine to come up
	I1105 18:59:37.190792   60910 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:39.191496   60910 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:37.246883   60943 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 18:59:37.246942   60943 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1105 18:59:37.252495   60943 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1105 18:59:37.252524   60943 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1105 18:59:37.279151   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1105 18:59:37.975741   60943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 18:59:37.975828   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:37.975868   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-929548 minikube.k8s.io/updated_at=2024_11_05T18_59_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=custom-flannel-929548 minikube.k8s.io/primary=true
	I1105 18:59:38.173292   60943 ops.go:34] apiserver oom_adj: -16
	I1105 18:59:38.173446   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:38.673637   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:39.174500   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:39.674328   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:40.173557   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:40.674162   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:41.173804   60943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 18:59:41.312299   60943 kubeadm.go:1113] duration metric: took 3.336528328s to wait for elevateKubeSystemPrivileges
	I1105 18:59:41.312330   60943 kubeadm.go:394] duration metric: took 14.886262316s to StartCluster
	I1105 18:59:41.312349   60943 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:41.312431   60943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:59:41.314196   60943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:41.314450   60943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 18:59:41.314458   60943 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.88 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:59:41.314531   60943 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 18:59:41.314625   60943 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-929548"
	I1105 18:59:41.314645   60943 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-929548"
	I1105 18:59:41.314657   60943 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-929548"
	I1105 18:59:41.314665   60943 config.go:182] Loaded profile config "custom-flannel-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:59:41.314678   60943 host.go:66] Checking if "custom-flannel-929548" exists ...
	I1105 18:59:41.314679   60943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-929548"
	I1105 18:59:41.315198   60943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:41.315246   60943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:41.315205   60943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:41.315322   60943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:41.315936   60943 out.go:177] * Verifying Kubernetes components...
	I1105 18:59:41.317281   60943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:59:41.331533   60943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I1105 18:59:41.331543   60943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I1105 18:59:41.332028   60943 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:41.332580   60943 main.go:141] libmachine: Using API Version  1
	I1105 18:59:41.332602   60943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:41.332636   60943 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:41.332994   60943 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:41.333167   60943 main.go:141] libmachine: Using API Version  1
	I1105 18:59:41.333185   60943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:41.333192   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetState
	I1105 18:59:41.333503   60943 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:41.334096   60943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:41.334129   60943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:41.336871   60943 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-929548"
	I1105 18:59:41.336912   60943 host.go:66] Checking if "custom-flannel-929548" exists ...
	I1105 18:59:41.337184   60943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:41.337212   60943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:41.351033   60943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I1105 18:59:41.351515   60943 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:41.351986   60943 main.go:141] libmachine: Using API Version  1
	I1105 18:59:41.352010   60943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:41.352576   60943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I1105 18:59:41.352842   60943 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:41.353039   60943 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:41.353043   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetState
	I1105 18:59:41.353458   60943 main.go:141] libmachine: Using API Version  1
	I1105 18:59:41.353482   60943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:41.353875   60943 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:41.354412   60943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:41.354452   60943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:41.354765   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:41.356441   60943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 18:59:41.357932   60943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:59:41.357953   60943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 18:59:41.357974   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:41.361073   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:41.361539   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:41.361555   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:41.361719   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:41.361889   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:41.362137   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:41.362353   60943 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa Username:docker}
	I1105 18:59:41.371967   60943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35327
	I1105 18:59:41.372467   60943 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:41.372892   60943 main.go:141] libmachine: Using API Version  1
	I1105 18:59:41.372922   60943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:41.373720   60943 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:41.373975   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetState
	I1105 18:59:41.375743   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .DriverName
	I1105 18:59:41.375943   60943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 18:59:41.375958   60943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 18:59:41.375977   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHHostname
	I1105 18:59:41.378600   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:41.378986   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:e9:82", ip: ""} in network mk-custom-flannel-929548: {Iface:virbr2 ExpiryTime:2024-11-05 19:59:11 +0000 UTC Type:0 Mac:52:54:00:4f:e9:82 Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:custom-flannel-929548 Clientid:01:52:54:00:4f:e9:82}
	I1105 18:59:41.379018   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | domain custom-flannel-929548 has defined IP address 192.168.50.88 and MAC address 52:54:00:4f:e9:82 in network mk-custom-flannel-929548
	I1105 18:59:41.379229   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHPort
	I1105 18:59:41.379423   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHKeyPath
	I1105 18:59:41.379591   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .GetSSHUsername
	I1105 18:59:41.379753   60943 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/custom-flannel-929548/id_rsa Username:docker}
	I1105 18:59:41.483019   60943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 18:59:41.546752   60943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:59:41.731263   60943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 18:59:41.846927   60943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 18:59:41.965556   60943 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
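The pipeline logged at 18:59:41.483019 rewrites CoreDNS's Corefile so that host.minikube.internal resolves to the host-side gateway IP (192.168.50.1 here). The Go sketch below reproduces just the hosts{} injection half of that pipeline (it skips the extra `log` line the same sed also adds), running kubectl locally instead of over minikube's ssh_runner; the function name and the assumption that kubectl is on PATH with the right context are illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// injectHostRecord mirrors the logged sed pipeline: fetch the coredns
// ConfigMap, insert a hosts{} block just before the forward directive so
// host.minikube.internal resolves to hostIP, then replace the ConfigMap.
func injectHostRecord(hostIP string) error {
	out, err := exec.Command("kubectl", "-n", "kube-system",
		"get", "configmap", "coredns", "-o", "yaml").Output()
	if err != nil {
		return err
	}
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var patched strings.Builder
	for _, line := range strings.Split(string(out), "\n") {
		// The logged sed inserts the block before the forward directive.
		if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
			patched.WriteString(hostsBlock)
		}
		patched.WriteString(line + "\n")
	}
	cmd := exec.Command("kubectl", "replace", "-f", "-")
	cmd.Stdin = strings.NewReader(patched.String())
	return cmd.Run()
}

func main() {
	if err := injectHostRecord("192.168.50.1"); err != nil {
		fmt.Println("inject failed:", err)
	}
}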
	I1105 18:59:41.967059   60943 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-929548" to be "Ready" ...
	I1105 18:59:42.088277   60943 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:42.088303   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .Close
	I1105 18:59:42.088649   60943 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:42.088668   60943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:42.088674   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Closing plugin on server side
	I1105 18:59:42.088679   60943 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:42.088699   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .Close
	I1105 18:59:42.088949   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Closing plugin on server side
	I1105 18:59:42.088989   60943 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:42.089008   60943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:42.116509   60943 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:42.116535   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .Close
	I1105 18:59:42.116867   60943 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:42.116892   60943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:42.116914   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Closing plugin on server side
	I1105 18:59:42.442789   60943 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:42.442817   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .Close
	I1105 18:59:42.443147   60943 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:42.443157   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Closing plugin on server side
	I1105 18:59:42.443166   60943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:42.443182   60943 main.go:141] libmachine: Making call to close driver server
	I1105 18:59:42.443189   60943 main.go:141] libmachine: (custom-flannel-929548) Calling .Close
	I1105 18:59:42.443425   60943 main.go:141] libmachine: (custom-flannel-929548) DBG | Closing plugin on server side
	I1105 18:59:42.443426   60943 main.go:141] libmachine: Successfully made call to close driver server
	I1105 18:59:42.443456   60943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 18:59:42.444867   60943 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1105 18:59:40.078042   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:40.078620   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find current IP address of domain enable-default-cni-929548 in network mk-enable-default-cni-929548
	I1105 18:59:40.078649   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | I1105 18:59:40.078571   62953 retry.go:31] will retry after 4.560320674s: waiting for machine to come up
	I1105 18:59:41.691904   60910 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:44.193230   60910 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:42.446134   60943 addons.go:510] duration metric: took 1.131599296s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1105 18:59:42.471363   60943 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-929548" context rescaled to 1 replicas
	I1105 18:59:43.975999   60943 node_ready.go:53] node "custom-flannel-929548" has status "Ready":"False"
	I1105 18:59:46.471038   60943 node_ready.go:53] node "custom-flannel-929548" has status "Ready":"False"
	I1105 18:59:44.640808   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:44.641310   62292 main.go:141] libmachine: (enable-default-cni-929548) Found IP for machine: 192.168.72.73
	I1105 18:59:44.641343   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has current primary IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:44.641351   62292 main.go:141] libmachine: (enable-default-cni-929548) Reserving static IP address...
	I1105 18:59:44.641770   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-929548", mac: "52:54:00:3d:ae:72", ip: "192.168.72.73"} in network mk-enable-default-cni-929548
	I1105 18:59:44.719554   62292 main.go:141] libmachine: (enable-default-cni-929548) Reserved static IP address: 192.168.72.73
	I1105 18:59:44.719586   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Getting to WaitForSSH function...
	I1105 18:59:44.719595   62292 main.go:141] libmachine: (enable-default-cni-929548) Waiting for SSH to be available...
	I1105 18:59:44.722050   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:44.722371   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548
	I1105 18:59:44.722402   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | unable to find defined IP address of network mk-enable-default-cni-929548 interface with MAC address 52:54:00:3d:ae:72
	I1105 18:59:44.722508   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Using SSH client type: external
	I1105 18:59:44.722536   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa (-rw-------)
	I1105 18:59:44.722575   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:59:44.722588   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | About to run SSH command:
	I1105 18:59:44.722604   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | exit 0
	I1105 18:59:44.726114   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | SSH cmd err, output: exit status 255: 
	I1105 18:59:44.726136   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1105 18:59:44.726147   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | command : exit 0
	I1105 18:59:44.726156   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | err     : exit status 255
	I1105 18:59:44.726169   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | output  : 
	I1105 18:59:47.727896   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Getting to WaitForSSH function...
	I1105 18:59:47.731096   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:47.731501   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:47.731528   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:47.731654   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Using SSH client type: external
	I1105 18:59:47.731676   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa (-rw-------)
	I1105 18:59:47.731704   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 18:59:47.731713   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | About to run SSH command:
	I1105 18:59:47.731723   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | exit 0
	I1105 18:59:47.859793   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | SSH cmd err, output: <nil>: 
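The failed attempt at 18:59:44 (exit status 255) followed by the clean run at 18:59:47 is the usual WaitForSSH pattern: keep executing `exit 0` over SSH until the guest answers. A minimal sketch of that loop using the system ssh binary, as the "external" client path above does; the user, IP, key path, retry count and interval are placeholders, not minikube's values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op command over SSH until the guest accepts the
// connection, mirroring the WaitForSSH loop in the log above.
func waitForSSH(user, ip, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // the log likewise retries after a few seconds
	}
	return fmt.Errorf("ssh to %s@%s never became available", user, ip)
}

func main() {
	if err := waitForSSH("docker", "192.168.72.73", "id_rsa", 20); err != nil {
		fmt.Println(err)
	}
}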
	I1105 18:59:47.860266   62292 main.go:141] libmachine: (enable-default-cni-929548) KVM machine creation complete!
	I1105 18:59:47.860544   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetConfigRaw
	I1105 18:59:47.861208   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 18:59:47.861412   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 18:59:47.861624   62292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 18:59:47.861641   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetState
	I1105 18:59:47.863153   62292 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 18:59:47.863169   62292 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 18:59:47.863175   62292 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 18:59:47.863184   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:47.865580   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:47.866005   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:47.866035   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:47.866281   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:47.866436   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:47.866584   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:47.866713   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:47.866894   62292 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:47.867136   62292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I1105 18:59:47.867151   62292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 18:59:47.974827   62292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:59:47.974850   62292 main.go:141] libmachine: Detecting the provisioner...
	I1105 18:59:47.974858   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:47.978231   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:47.978697   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:47.978741   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:47.979036   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:47.979208   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:47.979343   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:47.979440   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:47.979616   62292 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:47.979801   62292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I1105 18:59:47.979815   62292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 18:59:48.087384   62292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 18:59:48.087514   62292 main.go:141] libmachine: found compatible host: buildroot
	I1105 18:59:48.087529   62292 main.go:141] libmachine: Provisioning with buildroot...
	I1105 18:59:48.087540   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetMachineName
	I1105 18:59:48.087799   62292 buildroot.go:166] provisioning hostname "enable-default-cni-929548"
	I1105 18:59:48.087830   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetMachineName
	I1105 18:59:48.088002   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:48.090752   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.091108   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:48.091156   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.091331   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:48.091491   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:48.091629   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:48.091740   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:48.091878   62292 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:48.092063   62292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I1105 18:59:48.092079   62292 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-929548 && echo "enable-default-cni-929548" | sudo tee /etc/hostname
	I1105 18:59:48.213476   62292 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-929548
	
	I1105 18:59:48.213507   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:48.217529   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.218041   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:48.218089   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.218452   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:48.218657   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:48.218837   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:48.219012   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:48.219173   62292 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:48.219401   62292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I1105 18:59:48.219424   62292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-929548' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-929548/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-929548' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:59:48.331776   62292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:59:48.331810   62292 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:59:48.331833   62292 buildroot.go:174] setting up certificates
	I1105 18:59:48.331844   62292 provision.go:84] configureAuth start
	I1105 18:59:48.331856   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetMachineName
	I1105 18:59:48.332155   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetIP
	I1105 18:59:48.334756   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.335238   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:48.335284   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.335469   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:48.337648   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.337962   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:48.338000   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.338231   62292 provision.go:143] copyHostCerts
	I1105 18:59:48.338295   62292 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:59:48.338313   62292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:59:48.338364   62292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:59:48.338459   62292 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:59:48.338466   62292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:59:48.338488   62292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:59:48.338551   62292 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:59:48.338558   62292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:59:48.338574   62292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:59:48.338630   62292 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-929548 san=[127.0.0.1 192.168.72.73 enable-default-cni-929548 localhost minikube]
	I1105 18:59:48.489205   62292 provision.go:177] copyRemoteCerts
	I1105 18:59:48.489266   62292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:59:48.489297   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:48.491986   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.492382   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:48.492411   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.492591   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:48.492800   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:48.492941   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:48.493056   62292 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa Username:docker}
	I1105 18:59:48.572869   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:59:48.602916   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1105 18:59:48.633893   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:59:48.661131   62292 provision.go:87] duration metric: took 329.274163ms to configureAuth
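The configureAuth step above (provision.go:117) signs a server certificate with the minikube CA, embedding the SANs listed in the log: 127.0.0.1, the VM IP, the hostname, localhost and minikube. A sketch of that signing step with crypto/x509 follows; CA loading is omitted, and the validity period, key size, and helper name are assumptions rather than minikube's actual values.

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate signed by the given CA,
// carrying the SANs shown in the log line above.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-929548"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"enable-default-cni-929548", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.73")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}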
	I1105 18:59:48.661163   62292 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:59:48.661366   62292 config.go:182] Loaded profile config "enable-default-cni-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:59:48.661456   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:48.664279   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.664645   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:48.664681   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.664803   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:48.664965   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:48.665135   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:48.665249   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:48.665387   62292 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:48.665553   62292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I1105 18:59:48.665566   62292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:59:49.151640   62635 start.go:364] duration metric: took 58.335354987s to acquireMachinesLock for "kubernetes-upgrade-906991"
	I1105 18:59:49.151696   62635 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:59:49.151709   62635 fix.go:54] fixHost starting: 
	I1105 18:59:49.152159   62635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:59:49.152215   62635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:59:49.170781   62635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I1105 18:59:49.171311   62635 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:59:49.171808   62635 main.go:141] libmachine: Using API Version  1
	I1105 18:59:49.171832   62635 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:59:49.172226   62635 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:59:49.172417   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:59:49.172580   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetState
	I1105 18:59:49.175235   62635 fix.go:112] recreateIfNeeded on kubernetes-upgrade-906991: state=Running err=<nil>
	W1105 18:59:49.175264   62635 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:59:49.176858   62635 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-906991" VM ...
	I1105 18:59:46.690465   60910 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:48.201156   60910 pod_ready.go:93] pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace has status "Ready":"True"
	I1105 18:59:48.201183   60910 pod_ready.go:82] duration metric: took 19.517166119s for pod "calico-kube-controllers-d4dc4cc65-qgs4l" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.201197   60910 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-tr2nf" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.208851   60910 pod_ready.go:93] pod "calico-node-tr2nf" in "kube-system" namespace has status "Ready":"True"
	I1105 18:59:48.208876   60910 pod_ready.go:82] duration metric: took 7.670469ms for pod "calico-node-tr2nf" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.208889   60910 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-rcfl4" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.218053   60910 pod_ready.go:93] pod "coredns-7c65d6cfc9-rcfl4" in "kube-system" namespace has status "Ready":"True"
	I1105 18:59:48.218073   60910 pod_ready.go:82] duration metric: took 9.174725ms for pod "coredns-7c65d6cfc9-rcfl4" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.218087   60910 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-929548" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.223533   60910 pod_ready.go:93] pod "etcd-calico-929548" in "kube-system" namespace has status "Ready":"True"
	I1105 18:59:48.223556   60910 pod_ready.go:82] duration metric: took 5.460289ms for pod "etcd-calico-929548" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.223567   60910 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-929548" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.228754   60910 pod_ready.go:93] pod "kube-apiserver-calico-929548" in "kube-system" namespace has status "Ready":"True"
	I1105 18:59:48.228779   60910 pod_ready.go:82] duration metric: took 5.202987ms for pod "kube-apiserver-calico-929548" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.228790   60910 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-929548" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.590791   60910 pod_ready.go:93] pod "kube-controller-manager-calico-929548" in "kube-system" namespace has status "Ready":"True"
	I1105 18:59:48.590818   60910 pod_ready.go:82] duration metric: took 362.01894ms for pod "kube-controller-manager-calico-929548" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.590832   60910 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-wwprt" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.988444   60910 pod_ready.go:93] pod "kube-proxy-wwprt" in "kube-system" namespace has status "Ready":"True"
	I1105 18:59:48.988468   60910 pod_ready.go:82] duration metric: took 397.629575ms for pod "kube-proxy-wwprt" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:48.988478   60910 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-929548" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:49.389078   60910 pod_ready.go:93] pod "kube-scheduler-calico-929548" in "kube-system" namespace has status "Ready":"True"
	I1105 18:59:49.389106   60910 pod_ready.go:82] duration metric: took 400.621069ms for pod "kube-scheduler-calico-929548" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:49.389120   60910 pod_ready.go:39] duration metric: took 20.719486318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:59:49.389137   60910 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:59:49.389196   60910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:59:49.405298   60910 api_server.go:72] duration metric: took 32.740540164s to wait for apiserver process to appear ...
	I1105 18:59:49.405330   60910 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:59:49.405355   60910 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1105 18:59:49.412576   60910 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I1105 18:59:49.414772   60910 api_server.go:141] control plane version: v1.31.2
	I1105 18:59:49.414804   60910 api_server.go:131] duration metric: took 9.465934ms to wait for apiserver health ...
	I1105 18:59:49.414815   60910 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:59:49.593733   60910 system_pods.go:59] 9 kube-system pods found
	I1105 18:59:49.593770   60910 system_pods.go:61] "calico-kube-controllers-d4dc4cc65-qgs4l" [a170ba3c-5d0e-4684-8980-e9efc1395941] Running
	I1105 18:59:49.593778   60910 system_pods.go:61] "calico-node-tr2nf" [5c1de0c0-d990-4a6d-83cd-18b0d0cd9b83] Running
	I1105 18:59:49.593785   60910 system_pods.go:61] "coredns-7c65d6cfc9-rcfl4" [8ba13dd3-38f9-4306-aaf0-341b473c72a5] Running
	I1105 18:59:49.593790   60910 system_pods.go:61] "etcd-calico-929548" [09e0b591-7df2-4b91-958d-ca920153fbd9] Running
	I1105 18:59:49.593796   60910 system_pods.go:61] "kube-apiserver-calico-929548" [62a6ba36-3892-47d6-a4b4-7af6611a9592] Running
	I1105 18:59:49.593802   60910 system_pods.go:61] "kube-controller-manager-calico-929548" [af3cbda1-4ed2-4003-a6d0-17a60af10022] Running
	I1105 18:59:49.593806   60910 system_pods.go:61] "kube-proxy-wwprt" [96b6be8b-de61-4b67-ada0-2a220bec4833] Running
	I1105 18:59:49.593811   60910 system_pods.go:61] "kube-scheduler-calico-929548" [0bf7220d-967f-4f57-ac6f-68914791c9e0] Running
	I1105 18:59:49.593816   60910 system_pods.go:61] "storage-provisioner" [80ad1438-9c49-409f-96ea-dd26ab65dabc] Running
	I1105 18:59:49.593824   60910 system_pods.go:74] duration metric: took 179.001701ms to wait for pod list to return data ...
	I1105 18:59:49.593837   60910 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:59:49.788278   60910 default_sa.go:45] found service account: "default"
	I1105 18:59:49.788313   60910 default_sa.go:55] duration metric: took 194.469334ms for default service account to be created ...
	I1105 18:59:49.788325   60910 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:59:49.992000   60910 system_pods.go:86] 9 kube-system pods found
	I1105 18:59:49.992029   60910 system_pods.go:89] "calico-kube-controllers-d4dc4cc65-qgs4l" [a170ba3c-5d0e-4684-8980-e9efc1395941] Running
	I1105 18:59:49.992040   60910 system_pods.go:89] "calico-node-tr2nf" [5c1de0c0-d990-4a6d-83cd-18b0d0cd9b83] Running
	I1105 18:59:49.992045   60910 system_pods.go:89] "coredns-7c65d6cfc9-rcfl4" [8ba13dd3-38f9-4306-aaf0-341b473c72a5] Running
	I1105 18:59:49.992049   60910 system_pods.go:89] "etcd-calico-929548" [09e0b591-7df2-4b91-958d-ca920153fbd9] Running
	I1105 18:59:49.992053   60910 system_pods.go:89] "kube-apiserver-calico-929548" [62a6ba36-3892-47d6-a4b4-7af6611a9592] Running
	I1105 18:59:49.992057   60910 system_pods.go:89] "kube-controller-manager-calico-929548" [af3cbda1-4ed2-4003-a6d0-17a60af10022] Running
	I1105 18:59:49.992060   60910 system_pods.go:89] "kube-proxy-wwprt" [96b6be8b-de61-4b67-ada0-2a220bec4833] Running
	I1105 18:59:49.992064   60910 system_pods.go:89] "kube-scheduler-calico-929548" [0bf7220d-967f-4f57-ac6f-68914791c9e0] Running
	I1105 18:59:49.992068   60910 system_pods.go:89] "storage-provisioner" [80ad1438-9c49-409f-96ea-dd26ab65dabc] Running
	I1105 18:59:49.992076   60910 system_pods.go:126] duration metric: took 203.744393ms to wait for k8s-apps to be running ...
	I1105 18:59:49.992085   60910 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:59:49.992129   60910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:59:50.006283   60910 system_svc.go:56] duration metric: took 14.186611ms WaitForService to wait for kubelet
	I1105 18:59:50.006317   60910 kubeadm.go:582] duration metric: took 33.341562752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:59:50.006340   60910 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:59:50.189384   60910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:59:50.189407   60910 node_conditions.go:123] node cpu capacity is 2
	I1105 18:59:50.189418   60910 node_conditions.go:105] duration metric: took 183.073399ms to run NodePressure ...
	I1105 18:59:50.189429   60910 start.go:241] waiting for startup goroutines ...
	I1105 18:59:50.189435   60910 start.go:246] waiting for cluster config update ...
	I1105 18:59:50.189444   60910 start.go:255] writing updated cluster config ...
	I1105 18:59:50.189766   60910 ssh_runner.go:195] Run: rm -f paused
	I1105 18:59:50.245560   60910 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 18:59:50.248567   60910 out.go:177] * Done! kubectl is now configured to use "calico-929548" cluster and "default" namespace by default
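The pod_ready.go lines above poll each control-plane pod in kube-system until its Ready condition reports True before declaring the calico-929548 cluster done. Below is a minimal client-go sketch of that wait loop; the kubeconfig path and 15-minute budget come from the log, while the function name and 2-second poll interval are illustrative, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls pods matching labelSelector until one reports the
// PodReady condition as True, or the timeout expires.
func waitPodReady(kubeconfig, namespace, labelSelector string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: labelSelector})
		if err == nil {
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in %q", labelSelector, namespace)
}

func main() {
	if err := waitPodReady("/var/lib/minikube/kubeconfig", "kube-system",
		"k8s-app=kube-dns", 15*time.Minute); err != nil {
		fmt.Println(err)
	}
}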
	I1105 18:59:48.907154   62292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:59:48.907185   62292 main.go:141] libmachine: Checking connection to Docker...
	I1105 18:59:48.907197   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetURL
	I1105 18:59:48.908527   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Using libvirt version 6000000
	I1105 18:59:48.911382   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.911799   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:48.911830   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.911998   62292 main.go:141] libmachine: Docker is up and running!
	I1105 18:59:48.912016   62292 main.go:141] libmachine: Reticulating splines...
	I1105 18:59:48.912022   62292 client.go:171] duration metric: took 29.331014971s to LocalClient.Create
	I1105 18:59:48.912043   62292 start.go:167] duration metric: took 29.331078933s to libmachine.API.Create "enable-default-cni-929548"
	I1105 18:59:48.912053   62292 start.go:293] postStartSetup for "enable-default-cni-929548" (driver="kvm2")
	I1105 18:59:48.912062   62292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:59:48.912078   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 18:59:48.912273   62292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:59:48.912292   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:48.914672   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.915050   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:48.915078   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:48.915246   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:48.915422   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:48.915615   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:48.915740   62292 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa Username:docker}
	I1105 18:59:49.001078   62292 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:59:49.005183   62292 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:59:49.005207   62292 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:59:49.005261   62292 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:59:49.005343   62292 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:59:49.005455   62292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:59:49.014355   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:59:49.036867   62292 start.go:296] duration metric: took 124.801425ms for postStartSetup
	I1105 18:59:49.036910   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetConfigRaw
	I1105 18:59:49.037524   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetIP
	I1105 18:59:49.040363   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.040819   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:49.040848   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.041136   62292 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/config.json ...
	I1105 18:59:49.041384   62292 start.go:128] duration metric: took 29.481444765s to createHost
	I1105 18:59:49.041417   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:49.044002   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.044306   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:49.044335   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.044460   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:49.044626   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:49.044819   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:49.044938   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:49.045121   62292 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:49.045286   62292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I1105 18:59:49.045296   62292 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:59:49.151450   62292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833189.133068742
	
	I1105 18:59:49.151476   62292 fix.go:216] guest clock: 1730833189.133068742
	I1105 18:59:49.151485   62292 fix.go:229] Guest: 2024-11-05 18:59:49.133068742 +0000 UTC Remote: 2024-11-05 18:59:49.041401273 +0000 UTC m=+75.253773072 (delta=91.667469ms)
	I1105 18:59:49.151525   62292 fix.go:200] guest clock delta is within tolerance: 91.667469ms
	I1105 18:59:49.151536   62292 start.go:83] releasing machines lock for "enable-default-cni-929548", held for 29.591760623s
	I1105 18:59:49.151568   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 18:59:49.151854   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetIP
	I1105 18:59:49.154754   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.155240   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:49.155283   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.155465   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 18:59:49.155993   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 18:59:49.156146   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 18:59:49.156253   62292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:59:49.156291   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:49.156309   62292 ssh_runner.go:195] Run: cat /version.json
	I1105 18:59:49.156328   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 18:59:49.158922   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.159207   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.159250   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:49.159275   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.159396   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:49.159580   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:49.159677   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:49.159708   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:49.159906   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 18:59:49.159923   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:49.160091   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 18:59:49.160089   62292 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa Username:docker}
	I1105 18:59:49.160239   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 18:59:49.160396   62292 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa Username:docker}
	I1105 18:59:49.235850   62292 ssh_runner.go:195] Run: systemctl --version
	I1105 18:59:49.269948   62292 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:59:49.431953   62292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:59:49.440065   62292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:59:49.440132   62292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:59:49.459655   62292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 18:59:49.459687   62292 start.go:495] detecting cgroup driver to use...
	I1105 18:59:49.459770   62292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:59:49.479271   62292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:59:49.496448   62292 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:59:49.496523   62292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:59:49.512800   62292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:59:49.527785   62292 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:59:49.659682   62292 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:59:49.819298   62292 docker.go:233] disabling docker service ...
	I1105 18:59:49.819378   62292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:59:49.836227   62292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:59:49.849390   62292 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:59:50.017260   62292 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:59:50.161299   62292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:59:50.174816   62292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:59:50.192648   62292 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:59:50.192695   62292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:50.204107   62292 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:59:50.204156   62292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:50.215072   62292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:50.225376   62292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:50.239612   62292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:59:50.251055   62292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:50.261006   62292 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:50.278753   62292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:59:50.288222   62292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:59:50.296794   62292 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 18:59:50.296853   62292 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 18:59:50.308594   62292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:59:50.318696   62292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:59:50.447935   62292 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:59:50.537028   62292 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:59:50.537106   62292 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:59:50.541794   62292 start.go:563] Will wait 60s for crictl version
	I1105 18:59:50.541844   62292 ssh_runner.go:195] Run: which crictl
	I1105 18:59:50.545392   62292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:59:50.582292   62292 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:59:50.582360   62292 ssh_runner.go:195] Run: crio --version
	I1105 18:59:50.611911   62292 ssh_runner.go:195] Run: crio --version
	I1105 18:59:50.641061   62292 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:59:49.178169   62635 machine.go:93] provisionDockerMachine start ...
	I1105 18:59:49.178209   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:59:49.178443   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:49.181125   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.181517   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:49.181544   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.181692   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:49.181846   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.181999   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.182125   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:49.182312   62635 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:49.182548   62635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:59:49.182564   62635 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:59:49.297573   62635 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-906991
	
	I1105 18:59:49.297614   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetMachineName
	I1105 18:59:49.297858   62635 buildroot.go:166] provisioning hostname "kubernetes-upgrade-906991"
	I1105 18:59:49.297884   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetMachineName
	I1105 18:59:49.298066   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:49.301313   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.301695   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:49.301755   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.301996   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:49.302199   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.302340   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.302512   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:49.302695   62635 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:49.302939   62635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:59:49.302959   62635 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-906991 && echo "kubernetes-upgrade-906991" | sudo tee /etc/hostname
	I1105 18:59:49.432188   62635 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-906991
	
	I1105 18:59:49.432243   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:49.435643   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.436060   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:49.436089   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.436287   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:49.436472   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.436603   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.436741   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:49.436924   62635 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:49.437131   62635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:59:49.437148   62635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-906991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-906991/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-906991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:59:49.564535   62635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:59:49.564564   62635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:59:49.564601   62635 buildroot.go:174] setting up certificates
	I1105 18:59:49.564615   62635 provision.go:84] configureAuth start
	I1105 18:59:49.564627   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetMachineName
	I1105 18:59:49.564901   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetIP
	I1105 18:59:49.568347   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.568913   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:49.568957   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.569308   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:49.572059   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.572495   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:49.572528   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.572846   62635 provision.go:143] copyHostCerts
	I1105 18:59:49.572917   62635 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:59:49.572931   62635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:59:49.572999   62635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:59:49.573210   62635 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:59:49.573226   62635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:59:49.573260   62635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:59:49.573363   62635 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:59:49.573375   62635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:59:49.573404   62635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:59:49.573497   62635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-906991 san=[127.0.0.1 192.168.61.130 kubernetes-upgrade-906991 localhost minikube]
	I1105 18:59:49.718952   62635 provision.go:177] copyRemoteCerts
	I1105 18:59:49.719079   62635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:59:49.719110   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:49.721813   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.722214   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:49.722242   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.722405   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:49.722581   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.722718   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:49.722826   62635 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa Username:docker}
	I1105 18:59:49.810599   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:59:49.838825   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1105 18:59:49.866744   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:59:49.894513   62635 provision.go:87] duration metric: took 329.883985ms to configureAuth
	I1105 18:59:49.894546   62635 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:59:49.894769   62635 config.go:182] Loaded profile config "kubernetes-upgrade-906991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:59:49.894852   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:49.897777   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.898243   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:49.898294   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:49.898470   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:49.898666   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.898833   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:49.898996   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:49.899169   62635 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:49.899389   62635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:59:49.899415   62635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:59:48.972347   60943 node_ready.go:53] node "custom-flannel-929548" has status "Ready":"False"
	I1105 18:59:49.471846   60943 node_ready.go:49] node "custom-flannel-929548" has status "Ready":"True"
	I1105 18:59:49.471886   60943 node_ready.go:38] duration metric: took 7.504787168s for node "custom-flannel-929548" to be "Ready" ...
	I1105 18:59:49.471899   60943 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:59:49.481388   60943 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace to be "Ready" ...
	I1105 18:59:51.489670   60943 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:50.642350   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetIP
	I1105 18:59:50.645111   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:50.645489   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 18:59:50.645522   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 18:59:50.645749   62292 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1105 18:59:50.649594   62292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:59:50.661439   62292 kubeadm.go:883] updating cluster {Name:enable-default-cni-929548 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:59:50.661543   62292 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:59:50.661581   62292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:59:50.691784   62292 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 18:59:50.691850   62292 ssh_runner.go:195] Run: which lz4
	I1105 18:59:50.695689   62292 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 18:59:50.699735   62292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 18:59:50.699783   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 18:59:51.994097   62292 crio.go:462] duration metric: took 1.298449621s to copy over tarball
	I1105 18:59:51.994192   62292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 18:59:54.256870   62292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.262643728s)
	I1105 18:59:54.256898   62292 crio.go:469] duration metric: took 2.262764696s to extract the tarball
	I1105 18:59:54.256904   62292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 18:59:54.293480   62292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:59:54.332014   62292 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:59:54.332042   62292 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:59:54.332053   62292 kubeadm.go:934] updating node { 192.168.72.73 8443 v1.31.2 crio true true} ...
	I1105 18:59:54.332167   62292 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-929548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1105 18:59:54.332249   62292 ssh_runner.go:195] Run: crio config
	I1105 18:59:54.381002   62292 cni.go:84] Creating CNI manager for "bridge"
	I1105 18:59:54.381032   62292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:59:54.381061   62292 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.73 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-929548 NodeName:enable-default-cni-929548 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:59:54.381194   62292 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-929548"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.73"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.73"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 18:59:54.381258   62292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:59:54.393108   62292 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:59:54.393189   62292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 18:59:54.402537   62292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1105 18:59:54.418202   62292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:59:54.434423   62292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I1105 18:59:54.450727   62292 ssh_runner.go:195] Run: grep 192.168.72.73	control-plane.minikube.internal$ /etc/hosts
	I1105 18:59:54.454384   62292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:59:54.465827   62292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:59:54.584893   62292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:59:54.603252   62292 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548 for IP: 192.168.72.73
	I1105 18:59:54.603282   62292 certs.go:194] generating shared ca certs ...
	I1105 18:59:54.603306   62292 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:54.603493   62292 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:59:54.603547   62292 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:59:54.603559   62292 certs.go:256] generating profile certs ...
	I1105 18:59:54.603630   62292 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.key
	I1105 18:59:54.603649   62292 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt with IP's: []
	I1105 18:59:54.799674   62292 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt ...
	I1105 18:59:54.799702   62292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: {Name:mk8243d41a362fdcd98455f7ee163dd0101fb2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:54.799877   62292 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.key ...
	I1105 18:59:54.799889   62292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.key: {Name:mkb0396f03162b0a4b1a22458d5c3c56421d5cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:54.799965   62292 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.key.60e6e1ae
	I1105 18:59:54.799985   62292 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.crt.60e6e1ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.73]
	I1105 18:59:54.862716   62292 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.crt.60e6e1ae ...
	I1105 18:59:54.862747   62292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.crt.60e6e1ae: {Name:mk37a9ce164eff88cb932a2d491d321a01766358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:54.862895   62292 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.key.60e6e1ae ...
	I1105 18:59:54.862907   62292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.key.60e6e1ae: {Name:mk8d289215e0488709c9b223cf6c0da3a2fef177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:54.862995   62292 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.crt.60e6e1ae -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.crt
	I1105 18:59:54.863105   62292 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.key.60e6e1ae -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.key
	I1105 18:59:54.863166   62292 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/proxy-client.key
	I1105 18:59:54.863182   62292 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/proxy-client.crt with IP's: []
	I1105 18:59:55.417568   62292 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/proxy-client.crt ...
	I1105 18:59:55.417597   62292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/proxy-client.crt: {Name:mkaa237f408dab3dfe52a50c06121327bea5e9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:55.417758   62292 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/proxy-client.key ...
	I1105 18:59:55.417770   62292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/proxy-client.key: {Name:mk70ca1a5e34b9f475ec0c90e9583ecaf3a52765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:59:55.417940   62292 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:59:55.417979   62292 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:59:55.417988   62292 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:59:55.418013   62292 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:59:55.418038   62292 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:59:55.418055   62292 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:59:55.418088   62292 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:59:55.418629   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:59:55.447555   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:59:55.484689   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:59:55.512020   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:59:55.540346   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 18:59:55.563515   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:59:55.586934   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:59:55.610495   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 18:59:55.633511   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:59:55.672445   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:59:55.697269   62292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:59:55.720696   62292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:59:55.737238   62292 ssh_runner.go:195] Run: openssl version
	I1105 18:59:55.743610   62292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:59:55.755207   62292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:59:55.759762   62292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:59:55.759830   62292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:59:55.767412   62292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:59:55.779142   62292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:59:55.789973   62292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:55.794676   62292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:55.794741   62292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:59:55.801307   62292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:59:55.812598   62292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:59:55.822644   62292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:59:55.827141   62292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:59:55.827201   62292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:59:55.832518   62292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:59:55.842376   62292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:59:55.846531   62292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:59:55.846594   62292 kubeadm.go:392] StartCluster: {Name:enable-default-cni-929548 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-929548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:59:55.846707   62292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:59:55.846789   62292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:59:55.883132   62292 cri.go:89] found id: ""
	I1105 18:59:55.883200   62292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:59:55.893856   62292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 18:59:55.903692   62292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 18:59:55.913099   62292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 18:59:55.913120   62292 kubeadm.go:157] found existing configuration files:
	
	I1105 18:59:55.913173   62292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 18:59:55.921773   62292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 18:59:55.921822   62292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 18:59:55.930934   62292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 18:59:55.941323   62292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 18:59:55.941389   62292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 18:59:55.950691   62292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 18:59:55.959579   62292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 18:59:55.959638   62292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 18:59:55.969495   62292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 18:59:55.978433   62292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 18:59:55.978498   62292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 18:59:55.987334   62292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 18:59:56.043325   62292 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 18:59:56.043413   62292 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 18:59:56.148829   62292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 18:59:56.149006   62292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 18:59:56.149152   62292 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 18:59:56.157600   62292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 18:59:53.987338   60943 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:56.132829   60943 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:56.343037   62292 out.go:235]   - Generating certificates and keys ...
	I1105 18:59:56.343155   62292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 18:59:56.343224   62292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 18:59:56.343310   62292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 18:59:56.481827   62292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 18:59:56.715547   62292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 18:59:57.169047   62292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 18:59:57.319605   62292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 18:59:57.319804   62292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-929548 localhost] and IPs [192.168.72.73 127.0.0.1 ::1]
	I1105 18:59:57.506273   62292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 18:59:57.506458   62292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-929548 localhost] and IPs [192.168.72.73 127.0.0.1 ::1]
	I1105 18:59:58.124490   62292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 18:59:58.629310   62292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 18:59:58.673074   62292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 18:59:58.673368   62292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 18:59:58.779764   62292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 18:59:59.079767   62292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 18:59:59.212907   62292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 18:59:59.362042   62292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 18:59:59.507669   62292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 18:59:59.508303   62292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 18:59:59.510655   62292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 18:59:58.050002   62635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:59:58.050029   62635 machine.go:96] duration metric: took 8.871831282s to provisionDockerMachine
	I1105 18:59:58.050040   62635 start.go:293] postStartSetup for "kubernetes-upgrade-906991" (driver="kvm2")
	I1105 18:59:58.050050   62635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:59:58.050065   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:59:58.050377   62635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:59:58.050412   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:58.053497   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.053921   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:58.053952   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.054114   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:58.054288   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:58.054481   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:58.054663   62635 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa Username:docker}
	I1105 18:59:58.144364   62635 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:59:58.149638   62635 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:59:58.149667   62635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:59:58.149738   62635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:59:58.149837   62635 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:59:58.149959   62635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:59:58.162442   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:59:58.192905   62635 start.go:296] duration metric: took 142.849914ms for postStartSetup
	I1105 18:59:58.192947   62635 fix.go:56] duration metric: took 9.041238155s for fixHost
	I1105 18:59:58.192969   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:58.195923   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.196330   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:58.196374   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.196647   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:58.196839   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:58.196992   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:58.197128   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:58.197297   62635 main.go:141] libmachine: Using SSH client type: native
	I1105 18:59:58.197478   62635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.130 22 <nil> <nil>}
	I1105 18:59:58.197491   62635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:59:58.315616   62635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833198.306990680
	
	I1105 18:59:58.315642   62635 fix.go:216] guest clock: 1730833198.306990680
	I1105 18:59:58.315652   62635 fix.go:229] Guest: 2024-11-05 18:59:58.30699068 +0000 UTC Remote: 2024-11-05 18:59:58.192951782 +0000 UTC m=+67.522743287 (delta=114.038898ms)
	I1105 18:59:58.315685   62635 fix.go:200] guest clock delta is within tolerance: 114.038898ms
	I1105 18:59:58.315693   62635 start.go:83] releasing machines lock for "kubernetes-upgrade-906991", held for 9.164019278s
	I1105 18:59:58.315720   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:59:58.315971   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetIP
	I1105 18:59:58.318949   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.319323   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:58.319350   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.319485   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:59:58.319983   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:59:58.320160   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:59:58.320270   62635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:59:58.320314   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:58.320340   62635 ssh_runner.go:195] Run: cat /version.json
	I1105 18:59:58.320355   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHHostname
	I1105 18:59:58.322885   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.323021   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.323300   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:58.323337   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 18:59:58.323383   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.323399   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 18:59:58.323553   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:58.323710   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHPort
	I1105 18:59:58.323731   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:58.323921   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHKeyPath
	I1105 18:59:58.323942   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:58.324065   62635 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa Username:docker}
	I1105 18:59:58.324084   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetSSHUsername
	I1105 18:59:58.324215   62635 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/kubernetes-upgrade-906991/id_rsa Username:docker}
	I1105 18:59:58.430753   62635 ssh_runner.go:195] Run: systemctl --version
	I1105 18:59:58.438320   62635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:59:58.600870   62635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:59:58.624282   62635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:59:58.624365   62635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:59:58.657877   62635 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:59:58.657907   62635 start.go:495] detecting cgroup driver to use...
	I1105 18:59:58.657995   62635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:59:58.692250   62635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:59:58.741099   62635 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:59:58.741165   62635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:59:58.870306   62635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:59:59.111268   62635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:59:59.575763   62635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:00:00.000337   62635 docker.go:233] disabling docker service ...
	I1105 19:00:00.000409   62635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:00:00.105044   62635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:00:00.163459   62635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:00:00.409802   62635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:00:00.691819   62635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:59:58.488851   60943 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace has status "Ready":"False"
	I1105 19:00:00.489699   60943 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace has status "Ready":"False"
	I1105 18:59:59.512502   62292 out.go:235]   - Booting up control plane ...
	I1105 18:59:59.512633   62292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 18:59:59.512739   62292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 18:59:59.512821   62292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 18:59:59.530119   62292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 18:59:59.538964   62292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 18:59:59.539259   62292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 18:59:59.702126   62292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 18:59:59.702281   62292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:00:00.204148   62292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.042675ms
	I1105 19:00:00.204293   62292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:00:02.990133   60943 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace has status "Ready":"False"
	I1105 19:00:03.990705   60943 pod_ready.go:93] pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace has status "Ready":"True"
	I1105 19:00:03.990738   60943 pod_ready.go:82] duration metric: took 14.509321456s for pod "coredns-7c65d6cfc9-4zvvx" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:03.990751   60943 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-929548" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:03.997976   60943 pod_ready.go:93] pod "etcd-custom-flannel-929548" in "kube-system" namespace has status "Ready":"True"
	I1105 19:00:03.998006   60943 pod_ready.go:82] duration metric: took 7.246139ms for pod "etcd-custom-flannel-929548" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:03.998019   60943 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-929548" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:04.005648   60943 pod_ready.go:93] pod "kube-apiserver-custom-flannel-929548" in "kube-system" namespace has status "Ready":"True"
	I1105 19:00:04.005673   60943 pod_ready.go:82] duration metric: took 7.64544ms for pod "kube-apiserver-custom-flannel-929548" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:04.005686   60943 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-929548" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:04.012777   60943 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-929548" in "kube-system" namespace has status "Ready":"True"
	I1105 19:00:04.012805   60943 pod_ready.go:82] duration metric: took 7.109695ms for pod "kube-controller-manager-custom-flannel-929548" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:04.012818   60943 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-q8dkf" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:04.019415   60943 pod_ready.go:93] pod "kube-proxy-q8dkf" in "kube-system" namespace has status "Ready":"True"
	I1105 19:00:04.019443   60943 pod_ready.go:82] duration metric: took 6.616728ms for pod "kube-proxy-q8dkf" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:04.019457   60943 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-929548" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:04.386676   60943 pod_ready.go:93] pod "kube-scheduler-custom-flannel-929548" in "kube-system" namespace has status "Ready":"True"
	I1105 19:00:04.386699   60943 pod_ready.go:82] duration metric: took 367.23382ms for pod "kube-scheduler-custom-flannel-929548" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:04.386710   60943 pod_ready.go:39] duration metric: took 14.914798889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:00:04.386722   60943 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:00:04.386775   60943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:00:04.401418   60943 api_server.go:72] duration metric: took 23.086927347s to wait for apiserver process to appear ...
	I1105 19:00:04.401445   60943 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:00:04.401469   60943 api_server.go:253] Checking apiserver healthz at https://192.168.50.88:8443/healthz ...
	I1105 19:00:04.406860   60943 api_server.go:279] https://192.168.50.88:8443/healthz returned 200:
	ok
	I1105 19:00:04.407971   60943 api_server.go:141] control plane version: v1.31.2
	I1105 19:00:04.407996   60943 api_server.go:131] duration metric: took 6.543619ms to wait for apiserver health ...
	I1105 19:00:04.408003   60943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:00:04.590966   60943 system_pods.go:59] 7 kube-system pods found
	I1105 19:00:04.591014   60943 system_pods.go:61] "coredns-7c65d6cfc9-4zvvx" [444b6234-b7eb-4ab1-ac3a-f6316341532c] Running
	I1105 19:00:04.591022   60943 system_pods.go:61] "etcd-custom-flannel-929548" [d80dd81f-609c-4096-b5ec-6a08f5919f57] Running
	I1105 19:00:04.591027   60943 system_pods.go:61] "kube-apiserver-custom-flannel-929548" [29db0c28-cf3e-4701-9e49-1496afa45dda] Running
	I1105 19:00:04.591032   60943 system_pods.go:61] "kube-controller-manager-custom-flannel-929548" [a7ca7774-9e15-41ae-8f85-e61a63e201cd] Running
	I1105 19:00:04.591037   60943 system_pods.go:61] "kube-proxy-q8dkf" [8b9c5a86-4543-4f99-ad00-ee465d70d6a3] Running
	I1105 19:00:04.591041   60943 system_pods.go:61] "kube-scheduler-custom-flannel-929548" [537b494c-ded7-4bce-828f-7ad168e4814b] Running
	I1105 19:00:04.591046   60943 system_pods.go:61] "storage-provisioner" [c472ba55-6552-43e6-a20b-fac4d73f90b4] Running
	I1105 19:00:04.591053   60943 system_pods.go:74] duration metric: took 183.044583ms to wait for pod list to return data ...
	I1105 19:00:04.591064   60943 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:00:04.786387   60943 default_sa.go:45] found service account: "default"
	I1105 19:00:04.786410   60943 default_sa.go:55] duration metric: took 195.34154ms for default service account to be created ...
	I1105 19:00:04.786419   60943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:00:04.988509   60943 system_pods.go:86] 7 kube-system pods found
	I1105 19:00:04.988535   60943 system_pods.go:89] "coredns-7c65d6cfc9-4zvvx" [444b6234-b7eb-4ab1-ac3a-f6316341532c] Running
	I1105 19:00:04.988540   60943 system_pods.go:89] "etcd-custom-flannel-929548" [d80dd81f-609c-4096-b5ec-6a08f5919f57] Running
	I1105 19:00:04.988545   60943 system_pods.go:89] "kube-apiserver-custom-flannel-929548" [29db0c28-cf3e-4701-9e49-1496afa45dda] Running
	I1105 19:00:04.988549   60943 system_pods.go:89] "kube-controller-manager-custom-flannel-929548" [a7ca7774-9e15-41ae-8f85-e61a63e201cd] Running
	I1105 19:00:04.988553   60943 system_pods.go:89] "kube-proxy-q8dkf" [8b9c5a86-4543-4f99-ad00-ee465d70d6a3] Running
	I1105 19:00:04.988556   60943 system_pods.go:89] "kube-scheduler-custom-flannel-929548" [537b494c-ded7-4bce-828f-7ad168e4814b] Running
	I1105 19:00:04.988561   60943 system_pods.go:89] "storage-provisioner" [c472ba55-6552-43e6-a20b-fac4d73f90b4] Running
	I1105 19:00:04.988568   60943 system_pods.go:126] duration metric: took 202.144554ms to wait for k8s-apps to be running ...
	I1105 19:00:04.988574   60943 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:00:04.988615   60943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:00:05.007768   60943 system_svc.go:56] duration metric: took 19.182757ms WaitForService to wait for kubelet
	I1105 19:00:05.007797   60943 kubeadm.go:582] duration metric: took 23.693311696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:00:05.007821   60943 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:00:05.187100   60943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:00:05.187136   60943 node_conditions.go:123] node cpu capacity is 2
	I1105 19:00:05.187151   60943 node_conditions.go:105] duration metric: took 179.324471ms to run NodePressure ...
	I1105 19:00:05.187165   60943 start.go:241] waiting for startup goroutines ...
	I1105 19:00:05.187174   60943 start.go:246] waiting for cluster config update ...
	I1105 19:00:05.187187   60943 start.go:255] writing updated cluster config ...
	I1105 19:00:05.187475   60943 ssh_runner.go:195] Run: rm -f paused
	I1105 19:00:05.244318   60943 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:00:05.246061   60943 out.go:177] * Done! kubectl is now configured to use "custom-flannel-929548" cluster and "default" namespace by default
	I1105 19:00:00.734442   62635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:00:00.762878   62635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:00:00.762964   62635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:00:00.781251   62635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:00:00.781329   62635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:00:00.802782   62635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:00:00.829239   62635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:00:00.860004   62635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:00:00.892336   62635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:00:00.933099   62635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:00:00.966605   62635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:00:00.988003   62635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:00:01.007965   62635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:00:01.023683   62635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:00:01.236353   62635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:00:06.205214   62292 kubeadm.go:310] [api-check] The API server is healthy after 6.003481141s
	I1105 19:00:06.226192   62292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:00:06.253876   62292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:00:06.280014   62292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:00:06.280252   62292 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-929548 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:00:06.292880   62292 kubeadm.go:310] [bootstrap-token] Using token: a8kban.ldgm3pcghf7vwr8w
	I1105 19:00:06.294520   62292 out.go:235]   - Configuring RBAC rules ...
	I1105 19:00:06.294685   62292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:00:06.302272   62292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:00:06.311654   62292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:00:06.315786   62292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:00:06.323812   62292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:00:06.328051   62292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:00:06.615813   62292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:00:07.047983   62292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:00:07.613004   62292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:00:07.614017   62292 kubeadm.go:310] 
	I1105 19:00:07.614109   62292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:00:07.614143   62292 kubeadm.go:310] 
	I1105 19:00:07.614288   62292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:00:07.614297   62292 kubeadm.go:310] 
	I1105 19:00:07.614318   62292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:00:07.614412   62292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:00:07.614490   62292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:00:07.614504   62292 kubeadm.go:310] 
	I1105 19:00:07.614576   62292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:00:07.614587   62292 kubeadm.go:310] 
	I1105 19:00:07.614645   62292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:00:07.614657   62292 kubeadm.go:310] 
	I1105 19:00:07.614713   62292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:00:07.614831   62292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:00:07.614898   62292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:00:07.614905   62292 kubeadm.go:310] 
	I1105 19:00:07.615017   62292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:00:07.615110   62292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:00:07.615120   62292 kubeadm.go:310] 
	I1105 19:00:07.615232   62292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a8kban.ldgm3pcghf7vwr8w \
	I1105 19:00:07.615393   62292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:00:07.615437   62292 kubeadm.go:310] 	--control-plane 
	I1105 19:00:07.615447   62292 kubeadm.go:310] 
	I1105 19:00:07.615575   62292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:00:07.615587   62292 kubeadm.go:310] 
	I1105 19:00:07.615735   62292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a8kban.ldgm3pcghf7vwr8w \
	I1105 19:00:07.615902   62292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:00:07.617292   62292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:00:07.617372   62292 cni.go:84] Creating CNI manager for "bridge"
	I1105 19:00:07.620316   62292 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:00:07.621572   62292 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:00:07.635953   62292 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:00:07.666113   62292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:00:07.666185   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:07.666296   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-929548 minikube.k8s.io/updated_at=2024_11_05T19_00_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=enable-default-cni-929548 minikube.k8s.io/primary=true
	I1105 19:00:07.843557   62292 ops.go:34] apiserver oom_adj: -16
	I1105 19:00:07.843690   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:08.344129   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:08.843808   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:09.344083   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:09.844241   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:10.344570   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:10.843813   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:11.344616   62292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:00:11.441732   62292 kubeadm.go:1113] duration metric: took 3.775611148s to wait for elevateKubeSystemPrivileges
	I1105 19:00:11.441771   62292 kubeadm.go:394] duration metric: took 15.595179029s to StartCluster
	I1105 19:00:11.441794   62292 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:00:11.441884   62292 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:00:11.443921   62292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:00:11.444199   62292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 19:00:11.444199   62292 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:00:11.444297   62292 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:00:11.444385   62292 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-929548"
	I1105 19:00:11.444406   62292 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-929548"
	I1105 19:00:11.444406   62292 config.go:182] Loaded profile config "enable-default-cni-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:00:11.444413   62292 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-929548"
	I1105 19:00:11.444428   62292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-929548"
	I1105 19:00:11.444447   62292 host.go:66] Checking if "enable-default-cni-929548" exists ...
	I1105 19:00:11.444831   62292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:00:11.444866   62292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:00:11.444835   62292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:00:11.444990   62292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:00:11.445615   62292 out.go:177] * Verifying Kubernetes components...
	I1105 19:00:11.447044   62292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:00:11.462081   62292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I1105 19:00:11.462666   62292 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:00:11.463242   62292 main.go:141] libmachine: Using API Version  1
	I1105 19:00:11.463272   62292 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:00:11.463619   62292 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:00:11.463795   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetState
	I1105 19:00:11.465360   62292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I1105 19:00:11.465724   62292 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:00:11.466119   62292 main.go:141] libmachine: Using API Version  1
	I1105 19:00:11.466130   62292 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:00:11.466520   62292 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:00:11.466863   62292 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-929548"
	I1105 19:00:11.466888   62292 host.go:66] Checking if "enable-default-cni-929548" exists ...
	I1105 19:00:11.466913   62292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:00:11.466952   62292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:00:11.467357   62292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:00:11.467388   62292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:00:11.483367   62292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I1105 19:00:11.483944   62292 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:00:11.484506   62292 main.go:141] libmachine: Using API Version  1
	I1105 19:00:11.484525   62292 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:00:11.484549   62292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I1105 19:00:11.484910   62292 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:00:11.484961   62292 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:00:11.485122   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetState
	I1105 19:00:11.485465   62292 main.go:141] libmachine: Using API Version  1
	I1105 19:00:11.485492   62292 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:00:11.485824   62292 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:00:11.486504   62292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:00:11.486542   62292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:00:11.487527   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 19:00:11.489116   62292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:00:11.762165   62635 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.525711694s)
	I1105 19:00:11.762200   62635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:00:11.762253   62635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:00:11.767230   62635 start.go:563] Will wait 60s for crictl version
	I1105 19:00:11.767293   62635 ssh_runner.go:195] Run: which crictl
	I1105 19:00:11.771041   62635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:00:11.823209   62635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:00:11.823315   62635 ssh_runner.go:195] Run: crio --version
	I1105 19:00:11.861523   62635 ssh_runner.go:195] Run: crio --version
	I1105 19:00:11.895066   62635 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:00:11.490286   62292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:00:11.490300   62292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:00:11.490315   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 19:00:11.493378   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 19:00:11.493793   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 19:00:11.493817   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 19:00:11.494081   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 19:00:11.494238   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 19:00:11.494348   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 19:00:11.494481   62292 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa Username:docker}
	I1105 19:00:11.508238   62292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I1105 19:00:11.508970   62292 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:00:11.509659   62292 main.go:141] libmachine: Using API Version  1
	I1105 19:00:11.509685   62292 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:00:11.510063   62292 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:00:11.510438   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetState
	I1105 19:00:11.512261   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .DriverName
	I1105 19:00:11.512529   62292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:00:11.512549   62292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:00:11.512569   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHHostname
	I1105 19:00:11.515628   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 19:00:11.515898   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:ae:72", ip: ""} in network mk-enable-default-cni-929548: {Iface:virbr4 ExpiryTime:2024-11-05 19:59:35 +0000 UTC Type:0 Mac:52:54:00:3d:ae:72 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:enable-default-cni-929548 Clientid:01:52:54:00:3d:ae:72}
	I1105 19:00:11.515924   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | domain enable-default-cni-929548 has defined IP address 192.168.72.73 and MAC address 52:54:00:3d:ae:72 in network mk-enable-default-cni-929548
	I1105 19:00:11.516102   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHPort
	I1105 19:00:11.516300   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHKeyPath
	I1105 19:00:11.516451   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .GetSSHUsername
	I1105 19:00:11.516617   62292 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/enable-default-cni-929548/id_rsa Username:docker}
	I1105 19:00:11.716810   62292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:00:11.716842   62292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 19:00:11.986490   62292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:00:12.020723   62292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:00:12.248518   62292 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1105 19:00:12.250229   62292 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-929548" to be "Ready" ...
	I1105 19:00:12.261073   62292 node_ready.go:49] node "enable-default-cni-929548" has status "Ready":"True"
	I1105 19:00:12.261103   62292 node_ready.go:38] duration metric: took 10.845414ms for node "enable-default-cni-929548" to be "Ready" ...
	I1105 19:00:12.261116   62292 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:00:12.269926   62292 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-5zqd4" in "kube-system" namespace to be "Ready" ...
	I1105 19:00:12.469601   62292 main.go:141] libmachine: Making call to close driver server
	I1105 19:00:12.469648   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .Close
	I1105 19:00:12.469965   62292 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:00:12.469985   62292 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:00:12.469994   62292 main.go:141] libmachine: Making call to close driver server
	I1105 19:00:12.470003   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .Close
	I1105 19:00:12.469967   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Closing plugin on server side
	I1105 19:00:12.470249   62292 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:00:12.470265   62292 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:00:12.476892   62292 main.go:141] libmachine: Making call to close driver server
	I1105 19:00:12.476922   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .Close
	I1105 19:00:12.477270   62292 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:00:12.477302   62292 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:00:12.755260   62292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-929548" context rescaled to 1 replicas
	I1105 19:00:13.065687   62292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.044922443s)
	I1105 19:00:13.065763   62292 main.go:141] libmachine: Making call to close driver server
	I1105 19:00:13.065785   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .Close
	I1105 19:00:13.066079   62292 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:00:13.066097   62292 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:00:13.066106   62292 main.go:141] libmachine: Making call to close driver server
	I1105 19:00:13.066114   62292 main.go:141] libmachine: (enable-default-cni-929548) Calling .Close
	I1105 19:00:13.068066   62292 main.go:141] libmachine: (enable-default-cni-929548) DBG | Closing plugin on server side
	I1105 19:00:13.068099   62292 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:00:13.068108   62292 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:00:13.069866   62292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1105 19:00:13.071031   62292 addons.go:510] duration metric: took 1.626739114s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1105 19:00:11.896717   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetIP
	I1105 19:00:11.900284   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 19:00:11.900770   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:30:ab", ip: ""} in network mk-kubernetes-upgrade-906991: {Iface:virbr3 ExpiryTime:2024-11-05 19:58:25 +0000 UTC Type:0 Mac:52:54:00:17:30:ab Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:kubernetes-upgrade-906991 Clientid:01:52:54:00:17:30:ab}
	I1105 19:00:11.900802   62635 main.go:141] libmachine: (kubernetes-upgrade-906991) DBG | domain kubernetes-upgrade-906991 has defined IP address 192.168.61.130 and MAC address 52:54:00:17:30:ab in network mk-kubernetes-upgrade-906991
	I1105 19:00:11.900998   62635 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:00:11.906617   62635 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:00:11.906751   62635 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:00:11.906813   62635 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:00:11.952895   62635 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:00:11.952926   62635 crio.go:433] Images already preloaded, skipping extraction
	I1105 19:00:11.952991   62635 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:00:11.989498   62635 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:00:11.989528   62635 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:00:11.989537   62635 kubeadm.go:934] updating node { 192.168.61.130 8443 v1.31.2 crio true true} ...
	I1105 19:00:11.989666   62635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-906991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:00:11.989751   62635 ssh_runner.go:195] Run: crio config
	I1105 19:00:12.046169   62635 cni.go:84] Creating CNI manager for ""
	I1105 19:00:12.046192   62635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:00:12.046201   62635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:00:12.046260   62635 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.130 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-906991 NodeName:kubernetes-upgrade-906991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:00:12.046373   62635 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-906991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:00:12.046433   62635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:00:12.057209   62635 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:00:12.057305   62635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:00:12.067835   62635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1105 19:00:12.085346   62635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:00:12.103070   62635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1105 19:00:12.124566   62635 ssh_runner.go:195] Run: grep 192.168.61.130	control-plane.minikube.internal$ /etc/hosts
	I1105 19:00:12.129235   62635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:00:12.295770   62635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:00:12.315375   62635 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991 for IP: 192.168.61.130
	I1105 19:00:12.315419   62635 certs.go:194] generating shared ca certs ...
	I1105 19:00:12.315445   62635 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:00:12.315648   62635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:00:12.315717   62635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:00:12.315730   62635 certs.go:256] generating profile certs ...
	I1105 19:00:12.315836   62635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/client.key
	I1105 19:00:12.315928   62635 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.key.30533d61
	I1105 19:00:12.315994   62635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.key
	I1105 19:00:12.316161   62635 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:00:12.316209   62635 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:00:12.316222   62635 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:00:12.316254   62635 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:00:12.316288   62635 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:00:12.316320   62635 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:00:12.316377   62635 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:00:12.317210   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:00:12.351220   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:00:12.382330   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:00:12.414490   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:00:12.441966   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 19:00:12.472803   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:00:12.498666   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:00:12.523996   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:00:12.558257   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:00:12.588631   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:00:12.619217   62635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:00:12.648550   62635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:00:12.667433   62635 ssh_runner.go:195] Run: openssl version
	I1105 19:00:12.673534   62635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:00:12.689540   62635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:00:12.696011   62635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:00:12.696115   62635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:00:12.703706   62635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:00:12.715006   62635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:00:12.730313   62635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:00:12.735085   62635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:00:12.735160   62635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:00:12.740905   62635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:00:12.750818   62635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:00:12.762399   62635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:00:12.767538   62635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:00:12.767619   62635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:00:12.773920   62635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:00:12.784329   62635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:00:12.789569   62635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:00:12.797418   62635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:00:12.805396   62635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:00:12.811609   62635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:00:12.818035   62635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:00:12.823904   62635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 19:00:12.832091   62635 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:00:12.832180   62635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:00:12.832246   62635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:00:12.881299   62635 cri.go:89] found id: "602b3956f8b6826be0edf7407c1ad7e99964808d66903854424f1e07c2f0eca3"
	I1105 19:00:12.881323   62635 cri.go:89] found id: "e1896a70d903c80b22abd49f08deadf8e645cb381e2a3677eef000d401a1b0e4"
	I1105 19:00:12.881328   62635 cri.go:89] found id: "a4490969aecfa443ae0d372331e9bc9288eabc11ac6b239e69341c41aa7d0f7d"
	I1105 19:00:12.881337   62635 cri.go:89] found id: "7396408c6debd9d2a67fc9dbcc00d608623d06861f9350cd0b7e6392ed4184c2"
	I1105 19:00:12.881341   62635 cri.go:89] found id: "136825bbb2d881396e40bd9bf55997433d07454cd282b87b109cb6fcf0aaf4b8"
	I1105 19:00:12.881360   62635 cri.go:89] found id: "59e4670de00c2f1bf551cc48cf4058428f4770205ed06adbe8613247ebadf6ba"
	I1105 19:00:12.881366   62635 cri.go:89] found id: "137867e1fcccb9bb2484cf6e5efa444cf2e8d76b998b1ac04753b3be0f6875bb"
	I1105 19:00:12.881371   62635 cri.go:89] found id: "a0489e905f52cbd8ee2961d7512d8b6c5518ec75bf7c5f2d4257760fae5732b7"
	I1105 19:00:12.881375   62635 cri.go:89] found id: "280b15c04f3694aa32862dc0256f24be571702798c16e7cb21c0f0c31522e94b"
	I1105 19:00:12.881386   62635 cri.go:89] found id: "b11f578ecbeed80846891f3d323dbed95971183629f05b440f03c7b787ccbe6d"
	I1105 19:00:12.881394   62635 cri.go:89] found id: "3d14bd3fafc0a7bb5f9c701f83b5fdc128036b6f1a23eef50fdb7b32f4c16625"
	I1105 19:00:12.881399   62635 cri.go:89] found id: "398113c3add9a03f448fa7d9a38649e0f83e3ce7573463be5c01d3e63bd82517"
	I1105 19:00:12.881404   62635 cri.go:89] found id: "3890debdd077a909695ee76caac951a41612f32ef44476c0d7c01285b17d3735"
	I1105 19:00:12.881411   62635 cri.go:89] found id: "bbe72fc4452bb5995d7d90e7099f474584f584687867e656973f617ffe7e51fe"
	I1105 19:00:12.881421   62635 cri.go:89] found id: "010374c5a434af3482f062f9d1ccdf04a3578036cddca28ae36c9a28941f3c26"
	I1105 19:00:12.881427   62635 cri.go:89] found id: ""
	I1105 19:00:12.881478   62635 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-906991 -n kubernetes-upgrade-906991
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-906991 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-906991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-906991
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-906991: (1.042321606s)
--- FAIL: TestKubernetesUpgrade (425.88s)
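For reference, the post-mortem log above (the 19:00:12 entries) validates each control-plane certificate with "openssl x509 -noout -in <cert> -checkend 86400", i.e. it asks whether the certificate will still be valid 24 hours from now. Below is a minimal Go sketch of the same check; it is illustrative only, not minikube source, and the certificate path is simply one of the files named in the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path is no longer
	// valid at now+d (the equivalent of "openssl x509 -checkend <seconds>").
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log above; any of the certificates checked there would do.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}

In the run above these checks apparently all passed, since the log proceeds straight to StartCluster without regenerating any of the certificates.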

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (90.88s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-616842 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1105 18:57:14.491208   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:57:31.418687   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-616842 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.649306584s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-616842] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-616842" primary control-plane node in "pause-616842" cluster
	* Updating the running kvm2 "pause-616842" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-616842" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:56:49.422782   58421 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:56:49.423065   58421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:56:49.423075   58421 out.go:358] Setting ErrFile to fd 2...
	I1105 18:56:49.423080   58421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:56:49.423259   58421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:56:49.423768   58421 out.go:352] Setting JSON to false
	I1105 18:56:49.424622   58421 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5951,"bootTime":1730827058,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:56:49.424724   58421 start.go:139] virtualization: kvm guest
	I1105 18:56:49.426879   58421 out.go:177] * [pause-616842] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:56:49.428162   58421 notify.go:220] Checking for updates...
	I1105 18:56:49.428173   58421 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:56:49.429476   58421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:56:49.430855   58421 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:56:49.431984   58421 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:56:49.433171   58421 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:56:49.434525   58421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:56:49.436291   58421 config.go:182] Loaded profile config "pause-616842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:56:49.436908   58421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:56:49.436976   58421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:56:49.451517   58421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35565
	I1105 18:56:49.451847   58421 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:56:49.452392   58421 main.go:141] libmachine: Using API Version  1
	I1105 18:56:49.452414   58421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:56:49.452706   58421 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:56:49.452884   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:56:49.453133   58421 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:56:49.453424   58421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:56:49.453487   58421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:56:49.467776   58421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I1105 18:56:49.468122   58421 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:56:49.468566   58421 main.go:141] libmachine: Using API Version  1
	I1105 18:56:49.468587   58421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:56:49.468858   58421 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:56:49.469033   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:56:49.502008   58421 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:56:49.503145   58421 start.go:297] selected driver: kvm2
	I1105 18:56:49.503156   58421 start.go:901] validating driver "kvm2" against &{Name:pause-616842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.2 ClusterName:pause-616842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:56:49.503272   58421 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:56:49.503566   58421 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:56:49.503624   58421 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:56:49.518755   58421 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:56:49.519523   58421 cni.go:84] Creating CNI manager for ""
	I1105 18:56:49.519574   58421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:56:49.519624   58421 start.go:340] cluster config:
	{Name:pause-616842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-616842 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:56:49.519800   58421 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:56:49.521593   58421 out.go:177] * Starting "pause-616842" primary control-plane node in "pause-616842" cluster
	I1105 18:56:49.522994   58421 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:56:49.523038   58421 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:56:49.523048   58421 cache.go:56] Caching tarball of preloaded images
	I1105 18:56:49.523124   58421 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:56:49.523134   58421 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:56:49.523245   58421 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842/config.json ...
	I1105 18:56:49.523463   58421 start.go:360] acquireMachinesLock for pause-616842: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:57:22.855526   58421 start.go:364] duration metric: took 33.331959469s to acquireMachinesLock for "pause-616842"
	I1105 18:57:22.855584   58421 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:57:22.855596   58421 fix.go:54] fixHost starting: 
	I1105 18:57:22.855936   58421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:57:22.855990   58421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:57:22.872955   58421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I1105 18:57:22.873381   58421 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:57:22.873840   58421 main.go:141] libmachine: Using API Version  1
	I1105 18:57:22.873865   58421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:57:22.874169   58421 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:57:22.874349   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:57:22.874475   58421 main.go:141] libmachine: (pause-616842) Calling .GetState
	I1105 18:57:22.876028   58421 fix.go:112] recreateIfNeeded on pause-616842: state=Running err=<nil>
	W1105 18:57:22.876061   58421 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:57:22.878197   58421 out.go:177] * Updating the running kvm2 "pause-616842" VM ...
	I1105 18:57:22.879573   58421 machine.go:93] provisionDockerMachine start ...
	I1105 18:57:22.879594   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:57:22.879762   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:22.882572   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:22.883117   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:22.883153   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:22.883326   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:22.883500   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:22.883673   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:22.883816   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:22.883981   58421 main.go:141] libmachine: Using SSH client type: native
	I1105 18:57:22.884214   58421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I1105 18:57:22.884227   58421 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:57:22.992267   58421 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-616842
	
	I1105 18:57:22.992301   58421 main.go:141] libmachine: (pause-616842) Calling .GetMachineName
	I1105 18:57:22.992554   58421 buildroot.go:166] provisioning hostname "pause-616842"
	I1105 18:57:22.992582   58421 main.go:141] libmachine: (pause-616842) Calling .GetMachineName
	I1105 18:57:22.992759   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:22.995720   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:22.996123   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:22.996165   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:22.996315   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:22.996482   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:22.996659   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:22.996786   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:22.996964   58421 main.go:141] libmachine: Using SSH client type: native
	I1105 18:57:22.997144   58421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I1105 18:57:22.997168   58421 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-616842 && echo "pause-616842" | sudo tee /etc/hostname
	I1105 18:57:23.123324   58421 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-616842
	
	I1105 18:57:23.123357   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:23.126453   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.126851   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:23.126884   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.127119   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:23.127285   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:23.127438   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:23.127558   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:23.127725   58421 main.go:141] libmachine: Using SSH client type: native
	I1105 18:57:23.127944   58421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I1105 18:57:23.127971   58421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-616842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-616842/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-616842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:57:23.235756   58421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:57:23.235789   58421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 18:57:23.235812   58421 buildroot.go:174] setting up certificates
	I1105 18:57:23.235824   58421 provision.go:84] configureAuth start
	I1105 18:57:23.235838   58421 main.go:141] libmachine: (pause-616842) Calling .GetMachineName
	I1105 18:57:23.236110   58421 main.go:141] libmachine: (pause-616842) Calling .GetIP
	I1105 18:57:23.239435   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.239950   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:23.239984   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.240290   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:23.242857   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.243316   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:23.243343   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.243401   58421 provision.go:143] copyHostCerts
	I1105 18:57:23.243452   58421 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 18:57:23.243467   58421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 18:57:23.243521   58421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 18:57:23.243632   58421 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 18:57:23.243643   58421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 18:57:23.243663   58421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 18:57:23.243723   58421 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 18:57:23.243731   58421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 18:57:23.243756   58421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 18:57:23.243811   58421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.pause-616842 san=[127.0.0.1 192.168.39.64 localhost minikube pause-616842]
	I1105 18:57:23.410051   58421 provision.go:177] copyRemoteCerts
	I1105 18:57:23.410113   58421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:57:23.410160   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:23.413051   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.413422   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:23.413453   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.413618   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:23.413809   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:23.413986   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:23.414098   58421 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/pause-616842/id_rsa Username:docker}
	I1105 18:57:23.501434   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:57:23.534036   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 18:57:23.558661   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:57:23.583252   58421 provision.go:87] duration metric: took 347.414864ms to configureAuth
	I1105 18:57:23.583285   58421 buildroot.go:189] setting minikube options for container-runtime
	I1105 18:57:23.583531   58421 config.go:182] Loaded profile config "pause-616842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:57:23.583630   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:23.586551   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.586912   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:23.586935   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:23.587132   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:23.587323   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:23.587499   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:23.587631   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:23.587778   58421 main.go:141] libmachine: Using SSH client type: native
	I1105 18:57:23.587945   58421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I1105 18:57:23.587961   58421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:57:29.100996   58421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:57:29.101021   58421 machine.go:96] duration metric: took 6.221431737s to provisionDockerMachine
	I1105 18:57:29.101035   58421 start.go:293] postStartSetup for "pause-616842" (driver="kvm2")
	I1105 18:57:29.101049   58421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:57:29.101074   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:57:29.101380   58421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:57:29.101412   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:29.104897   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.105313   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:29.105341   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.105544   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:29.105766   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:29.105972   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:29.106119   58421 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/pause-616842/id_rsa Username:docker}
	I1105 18:57:29.193247   58421 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:57:29.198293   58421 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 18:57:29.198318   58421 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 18:57:29.198378   58421 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 18:57:29.198449   58421 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 18:57:29.198547   58421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:57:29.211544   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:57:29.238864   58421 start.go:296] duration metric: took 137.815718ms for postStartSetup
	I1105 18:57:29.238909   58421 fix.go:56] duration metric: took 6.383313521s for fixHost
	I1105 18:57:29.238934   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:29.242325   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.242678   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:29.242706   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.243027   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:29.243252   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:29.243461   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:29.243608   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:29.243805   58421 main.go:141] libmachine: Using SSH client type: native
	I1105 18:57:29.244030   58421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I1105 18:57:29.244042   58421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 18:57:29.351772   58421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833049.339426535
	
	I1105 18:57:29.351796   58421 fix.go:216] guest clock: 1730833049.339426535
	I1105 18:57:29.351805   58421 fix.go:229] Guest: 2024-11-05 18:57:29.339426535 +0000 UTC Remote: 2024-11-05 18:57:29.238914921 +0000 UTC m=+39.853473998 (delta=100.511614ms)
	I1105 18:57:29.351836   58421 fix.go:200] guest clock delta is within tolerance: 100.511614ms
	I1105 18:57:29.351848   58421 start.go:83] releasing machines lock for "pause-616842", held for 6.496288038s
	I1105 18:57:29.351876   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:57:29.352128   58421 main.go:141] libmachine: (pause-616842) Calling .GetIP
	I1105 18:57:29.354875   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.355250   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:29.355288   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.355388   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:57:29.355951   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:57:29.356102   58421 main.go:141] libmachine: (pause-616842) Calling .DriverName
	I1105 18:57:29.356173   58421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:57:29.356231   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:29.356325   58421 ssh_runner.go:195] Run: cat /version.json
	I1105 18:57:29.356365   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHHostname
	I1105 18:57:29.359243   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.359539   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.359664   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:29.359686   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.359931   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:29.359959   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:29.359961   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:29.360119   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHPort
	I1105 18:57:29.360159   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:29.360284   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:29.360321   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHKeyPath
	I1105 18:57:29.360437   58421 main.go:141] libmachine: (pause-616842) Calling .GetSSHUsername
	I1105 18:57:29.360428   58421 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/pause-616842/id_rsa Username:docker}
	I1105 18:57:29.360618   58421 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/pause-616842/id_rsa Username:docker}
	I1105 18:57:29.473903   58421 ssh_runner.go:195] Run: systemctl --version
	I1105 18:57:29.480227   58421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:57:29.651672   58421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 18:57:29.657880   58421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 18:57:29.657942   58421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:57:29.667029   58421 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:57:29.667055   58421 start.go:495] detecting cgroup driver to use...
	I1105 18:57:29.667120   58421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:57:29.682788   58421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:57:29.696913   58421 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:57:29.696986   58421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:57:29.710612   58421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:57:29.724261   58421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:57:29.852952   58421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:57:29.980876   58421 docker.go:233] disabling docker service ...
	I1105 18:57:29.980959   58421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:57:29.996203   58421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:57:30.010506   58421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:57:30.141336   58421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:57:30.278322   58421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:57:30.292264   58421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:57:30.313101   58421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:57:30.313169   58421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:57:30.324318   58421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:57:30.324388   58421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:57:30.334317   58421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:57:30.344575   58421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:57:30.355427   58421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:57:30.366584   58421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:57:30.376848   58421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:57:30.387656   58421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:57:30.398129   58421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:57:30.407582   58421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:57:30.417037   58421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:57:30.553162   58421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:57:30.763302   58421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:57:30.763374   58421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:57:30.768355   58421 start.go:563] Will wait 60s for crictl version
	I1105 18:57:30.768412   58421 ssh_runner.go:195] Run: which crictl
	I1105 18:57:30.772020   58421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:57:30.806689   58421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 18:57:30.806779   58421 ssh_runner.go:195] Run: crio --version
	I1105 18:57:30.834387   58421 ssh_runner.go:195] Run: crio --version
	I1105 18:57:30.868288   58421 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 18:57:30.869574   58421 main.go:141] libmachine: (pause-616842) Calling .GetIP
	I1105 18:57:30.872528   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:30.872867   58421 main.go:141] libmachine: (pause-616842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:01:65", ip: ""} in network mk-pause-616842: {Iface:virbr1 ExpiryTime:2024-11-05 19:55:43 +0000 UTC Type:0 Mac:52:54:00:60:01:65 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:pause-616842 Clientid:01:52:54:00:60:01:65}
	I1105 18:57:30.872896   58421 main.go:141] libmachine: (pause-616842) DBG | domain pause-616842 has defined IP address 192.168.39.64 and MAC address 52:54:00:60:01:65 in network mk-pause-616842
	I1105 18:57:30.873102   58421 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 18:57:30.877269   58421 kubeadm.go:883] updating cluster {Name:pause-616842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:pause-616842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-pl
ugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:57:30.877427   58421 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:57:30.877490   58421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:57:30.924788   58421 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:57:30.924811   58421 crio.go:433] Images already preloaded, skipping extraction
	I1105 18:57:30.924854   58421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:57:30.966152   58421 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:57:30.966184   58421 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:57:30.966194   58421 kubeadm.go:934] updating node { 192.168.39.64 8443 v1.31.2 crio true true} ...
	I1105 18:57:30.966310   58421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-616842 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-616842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:57:30.966411   58421 ssh_runner.go:195] Run: crio config
	I1105 18:57:31.012292   58421 cni.go:84] Creating CNI manager for ""
	I1105 18:57:31.012319   58421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:57:31.012330   58421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:57:31.012355   58421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-616842 NodeName:pause-616842 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:57:31.012502   58421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-616842"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.64"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
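The block above is the full kubeadm/kubelet/kube-proxy configuration minikube renders before restarting the node. For context, a minimal Go sketch of producing such a manifest from a struct with text/template is shown below; the struct, field names, and file name are illustrative only and are not minikube's actual implementation.

    // kubeletconfig_sketch.go - illustrative sketch, not minikube's code.
    package main

    import (
        "os"
        "text/template"
    )

    // KubeletOpts holds the handful of values substituted into the manifest.
    // The field names here are hypothetical placeholders.
    type KubeletOpts struct {
        ClientCAFile  string
        CRIEndpoint   string
        ClusterDomain string
        StaticPodPath string
    }

    const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      x509:
        clientCAFile: {{.ClientCAFile}}
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: {{.CRIEndpoint}}
    clusterDomain: "{{.ClusterDomain}}"
    staticPodPath: {{.StaticPodPath}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
        // Render to stdout; the real flow instead copies the result to the node
        // (see the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" line below).
        _ = t.Execute(os.Stdout, KubeletOpts{
            ClientCAFile:  "/var/lib/minikube/certs/ca.crt",
            CRIEndpoint:   "unix:///var/run/crio/crio.sock",
            ClusterDomain: "cluster.local",
            StaticPodPath: "/etc/kubernetes/manifests",
        })
    }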
	
	I1105 18:57:31.012576   58421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:57:31.022646   58421 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:57:31.022722   58421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 18:57:31.032378   58421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 18:57:31.051493   58421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:57:31.069464   58421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1105 18:57:31.085752   58421 ssh_runner.go:195] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I1105 18:57:31.089607   58421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:57:31.223435   58421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:57:31.237222   58421 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842 for IP: 192.168.39.64
	I1105 18:57:31.237252   58421 certs.go:194] generating shared ca certs ...
	I1105 18:57:31.237272   58421 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:57:31.237453   58421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 18:57:31.237513   58421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 18:57:31.237529   58421 certs.go:256] generating profile certs ...
	I1105 18:57:31.237634   58421 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842/client.key
	I1105 18:57:31.237713   58421 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842/apiserver.key.e202c1c5
	I1105 18:57:31.237767   58421 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842/proxy-client.key
	I1105 18:57:31.237918   58421 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 18:57:31.237971   58421 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 18:57:31.237985   58421 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 18:57:31.238025   58421 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 18:57:31.238061   58421 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:57:31.238092   58421 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 18:57:31.238155   58421 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 18:57:31.239090   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:57:31.263300   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 18:57:31.285662   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:57:31.309399   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 18:57:31.403288   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1105 18:57:31.491956   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 18:57:31.594502   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:57:31.723107   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/pause-616842/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 18:57:31.828890   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 18:57:31.927308   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:57:32.028360   58421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 18:57:32.106958   58421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:57:32.174476   58421 ssh_runner.go:195] Run: openssl version
	I1105 18:57:32.201508   58421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 18:57:32.231622   58421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 18:57:32.247208   58421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 18:57:32.247278   58421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 18:57:32.261303   58421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:57:32.274552   58421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:57:32.288837   58421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:57:32.293991   58421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:57:32.294060   58421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:57:32.301662   58421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:57:32.313077   58421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 18:57:32.324554   58421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 18:57:32.331350   58421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 18:57:32.331412   58421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 18:57:32.344111   58421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 18:57:32.385434   58421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:57:32.393146   58421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:57:32.430367   58421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:57:32.439202   58421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:57:32.444718   58421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:57:32.452152   58421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:57:32.457687   58421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
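The six preceding commands run `openssl x509 -checkend 86400` against each control-plane certificate to confirm none expires within 24 hours. A rough Go equivalent using only the standard library is sketched below; the certificate path in main is an assumption taken from the paths in the log.

    // certcheck_sketch.go - rough stand-in for `openssl x509 -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within the given window (86400s == 24h in the checks above).
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Path mirrors the log above; adjust for other certificates.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }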
	I1105 18:57:32.465429   58421 kubeadm.go:392] StartCluster: {Name:pause-616842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:pause-616842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugi
n:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:57:32.465573   58421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:57:32.465626   58421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:57:32.590094   58421 cri.go:89] found id: "26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9"
	I1105 18:57:32.590115   58421 cri.go:89] found id: "13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257"
	I1105 18:57:32.590119   58421 cri.go:89] found id: "7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d"
	I1105 18:57:32.590123   58421 cri.go:89] found id: "d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283"
	I1105 18:57:32.590125   58421 cri.go:89] found id: "0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9"
	I1105 18:57:32.590129   58421 cri.go:89] found id: "572ec4f05bc8f5536927babf4eef2cbfa6fea9c53bffbeaae20a426c31f68c4e"
	I1105 18:57:32.590131   58421 cri.go:89] found id: "593896013ec32e61042c8330f1332c77130a2d66990f2ac7d8438c3d24a44979"
	I1105 18:57:32.590134   58421 cri.go:89] found id: "1a115c11bc69bca1c2bb2fc3b9a0200ae44d9f6a45d2f09e780d0651c044559d"
	I1105 18:57:32.590136   58421 cri.go:89] found id: "46075292e61cbc8310529e78d039bce5900f7c24d4b3646ab648cfd7e066c512"
	I1105 18:57:32.590143   58421 cri.go:89] found id: "32786f4e197daabb2f9859d5e6ce025f5866614e54874e9eededdc4506c25a94"
	I1105 18:57:32.590146   58421 cri.go:89] found id: "cb870bbdc7b62cc4c9aee1dce7dda135b4c921b174ad6316806591a0cda8153c"
	I1105 18:57:32.590148   58421 cri.go:89] found id: ""
	I1105 18:57:32.590187   58421 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-616842 -n pause-616842
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-616842 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-616842 logs -n 25: (1.432313733s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-026921      | stopped-upgrade-026921    | jenkins | v1.34.0 | 05 Nov 24 18:56 UTC | 05 Nov 24 18:56 UTC |
	| start   | -p kindnet-929548              | kindnet-929548            | jenkins | v1.34.0 | 05 Nov 24 18:56 UTC | 05 Nov 24 18:58 UTC |
	|         | --memory=3072                  |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --wait-timeout=15m             |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-616842                | pause-616842              | jenkins | v1.34.0 | 05 Nov 24 18:56 UTC | 05 Nov 24 18:58 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 pgrep -a        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:57 UTC | 05 Nov 24 18:57 UTC |
	|         | kubelet                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-929548 pgrep -a     | kindnet-929548            | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | kubelet                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-906991   | kubernetes-upgrade-906991 | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	| ssh     | -p auto-929548 sudo cat        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /etc/nsswitch.conf             |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /etc/hosts                     |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /etc/resolv.conf               |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo crictl     | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | pods                           |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo crictl ps  | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | --all                          |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo find       | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /etc/cni -type f -exec sh -c   |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo ip a s     | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	| ssh     | -p auto-929548 sudo ip r s     | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	| start   | -p kubernetes-upgrade-906991   | kubernetes-upgrade-906991 | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo            | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | iptables-save                  |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo iptables   | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | -t nat -L -n -v                |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl  | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | status kubelet --all --full    |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl  | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | cat kubelet --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo journalctl | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | -xeu kubelet --all --full      |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /etc/kubernetes/kubelet.conf   |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /var/lib/kubelet/config.yaml   |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl  | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | status docker --all --full     |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl  | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | cat docker --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | /etc/docker/daemon.json        |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:58:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:58:13.739379   59622 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:58:13.739623   59622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:58:13.739633   59622 out.go:358] Setting ErrFile to fd 2...
	I1105 18:58:13.739643   59622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:58:13.739862   59622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:58:13.740424   59622 out.go:352] Setting JSON to false
	I1105 18:58:13.741584   59622 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6036,"bootTime":1730827058,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:58:13.741698   59622 start.go:139] virtualization: kvm guest
	I1105 18:58:13.744158   59622 out.go:177] * [kubernetes-upgrade-906991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:58:13.745695   59622 notify.go:220] Checking for updates...
	I1105 18:58:13.745709   59622 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:58:13.747095   59622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:58:13.748517   59622 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:58:13.749939   59622 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:58:13.751230   59622 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:58:13.752390   59622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:58:13.754214   59622 config.go:182] Loaded profile config "kubernetes-upgrade-906991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 18:58:13.754599   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:13.754640   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:13.770829   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I1105 18:58:13.771222   59622 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:13.771763   59622 main.go:141] libmachine: Using API Version  1
	I1105 18:58:13.771807   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:13.772251   59622 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:13.772439   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:58:13.772741   59622 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:58:13.773111   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:13.773146   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:13.789930   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I1105 18:58:13.790362   59622 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:13.790962   59622 main.go:141] libmachine: Using API Version  1
	I1105 18:58:13.791045   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:13.791443   59622 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:13.791628   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:58:13.827533   59622 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:58:13.828890   59622 start.go:297] selected driver: kvm2
	I1105 18:58:13.828911   59622 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:58:13.829018   59622 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:58:13.829704   59622 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:58:13.829784   59622 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:58:13.844850   59622 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:58:13.845232   59622 cni.go:84] Creating CNI manager for ""
	I1105 18:58:13.845280   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:58:13.845309   59622 start.go:340] cluster config:
	{Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-906991 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:58:13.845414   59622 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:58:13.847788   59622 out.go:177] * Starting "kubernetes-upgrade-906991" primary control-plane node in "kubernetes-upgrade-906991" cluster
	I1105 18:58:13.848991   59622 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:58:13.849057   59622 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:58:13.849072   59622 cache.go:56] Caching tarball of preloaded images
	I1105 18:58:13.849157   59622 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:58:13.849171   59622 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:58:13.849264   59622 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/config.json ...
	I1105 18:58:13.849462   59622 start.go:360] acquireMachinesLock for kubernetes-upgrade-906991: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:58:13.849513   59622 start.go:364] duration metric: took 31.107µs to acquireMachinesLock for "kubernetes-upgrade-906991"
	I1105 18:58:13.849538   59622 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:58:13.849548   59622 fix.go:54] fixHost starting: 
	I1105 18:58:13.849897   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:13.849938   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:13.867099   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I1105 18:58:13.867534   59622 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:13.868022   59622 main.go:141] libmachine: Using API Version  1
	I1105 18:58:13.868050   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:13.868335   59622 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:13.868532   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:58:13.868685   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetState
	I1105 18:58:13.870560   59622 fix.go:112] recreateIfNeeded on kubernetes-upgrade-906991: state=Stopped err=<nil>
	I1105 18:58:13.870589   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	W1105 18:58:13.870777   59622 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:58:13.872403   59622 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-906991" ...
	I1105 18:58:12.592399   58421 addons.go:510] duration metric: took 3.308253ms for enable addons: enabled=[]
	I1105 18:58:12.592452   58421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:58:12.801860   58421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:58:12.819908   58421 node_ready.go:35] waiting up to 6m0s for node "pause-616842" to be "Ready" ...
	I1105 18:58:12.823418   58421 node_ready.go:49] node "pause-616842" has status "Ready":"True"
	I1105 18:58:12.823446   58421 node_ready.go:38] duration metric: took 3.502143ms for node "pause-616842" to be "Ready" ...
	I1105 18:58:12.823458   58421 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:58:12.828668   58421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gwz48" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.148492   58421 pod_ready.go:93] pod "coredns-7c65d6cfc9-gwz48" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:13.148520   58421 pod_ready.go:82] duration metric: took 319.827492ms for pod "coredns-7c65d6cfc9-gwz48" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.148531   58421 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.547838   58421 pod_ready.go:93] pod "etcd-pause-616842" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:13.547864   58421 pod_ready.go:82] duration metric: took 399.325971ms for pod "etcd-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.547876   58421 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.947091   58421 pod_ready.go:93] pod "kube-apiserver-pause-616842" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:13.947116   58421 pod_ready.go:82] duration metric: took 399.231462ms for pod "kube-apiserver-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.947130   58421 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:14.346946   58421 pod_ready.go:93] pod "kube-controller-manager-pause-616842" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:14.346996   58421 pod_ready.go:82] duration metric: took 399.855267ms for pod "kube-controller-manager-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:14.347012   58421 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mgld6" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:14.747163   58421 pod_ready.go:93] pod "kube-proxy-mgld6" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:14.747193   58421 pod_ready.go:82] duration metric: took 400.172502ms for pod "kube-proxy-mgld6" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:14.747207   58421 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:15.147608   58421 pod_ready.go:93] pod "kube-scheduler-pause-616842" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:15.147637   58421 pod_ready.go:82] duration metric: took 400.4217ms for pod "kube-scheduler-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:15.147648   58421 pod_ready.go:39] duration metric: took 2.324179253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
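The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True, with a 6m0s ceiling per pod. A simplified client-go sketch of one such check follows; the kubeconfig path, namespace, and pod name are placeholders, and this is not minikube's actual wait loop.

    // podready_sketch.go - simplified version of the readiness wait above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady returns true once the pod reports the Ready condition as True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-616842", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for pod to be Ready")
            case <-time.After(2 * time.Second):
            }
        }
    }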
	I1105 18:58:15.147663   58421 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:58:15.147718   58421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:58:15.168745   58421 api_server.go:72] duration metric: took 2.579797777s to wait for apiserver process to appear ...
	I1105 18:58:15.168770   58421 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:58:15.168793   58421 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I1105 18:58:15.174177   58421 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I1105 18:58:15.175515   58421 api_server.go:141] control plane version: v1.31.2
	I1105 18:58:15.175542   58421 api_server.go:131] duration metric: took 6.764268ms to wait for apiserver health ...
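The api_server.go lines above probe https://192.168.39.64:8443/healthz until it returns 200 ("ok"). A standard-library sketch of such a probe is shown below; the CA path and endpoint are taken from this log, and the sketch assumes /healthz is reachable without client credentials, which may not hold on differently configured clusters.

    // healthz_sketch.go - rough equivalent of the apiserver healthz probe above.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        // CA path and endpoint mirror the values in the log; adjust as needed.
        caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            panic("could not parse CA certificate")
        }
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool},
            },
        }
        resp, err := client.Get("https://192.168.39.64:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver returns "200 ok"
    }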
	I1105 18:58:15.175552   58421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:58:15.349248   58421 system_pods.go:59] 6 kube-system pods found
	I1105 18:58:15.349278   58421 system_pods.go:61] "coredns-7c65d6cfc9-gwz48" [3cc42000-c8d8-452e-bc62-746d6be5a2cd] Running
	I1105 18:58:15.349282   58421 system_pods.go:61] "etcd-pause-616842" [6dcc846c-1784-482c-a494-ecf982fabbc9] Running
	I1105 18:58:15.349286   58421 system_pods.go:61] "kube-apiserver-pause-616842" [c1eb3f28-e7ea-4d1c-99ae-697450596e05] Running
	I1105 18:58:15.349290   58421 system_pods.go:61] "kube-controller-manager-pause-616842" [64e7490f-32fe-45fa-8954-38e81e9d70d0] Running
	I1105 18:58:15.349293   58421 system_pods.go:61] "kube-proxy-mgld6" [87a4c6d0-e674-4674-9c7b-0f859104617f] Running
	I1105 18:58:15.349296   58421 system_pods.go:61] "kube-scheduler-pause-616842" [b1228ca0-79cf-4410-9d06-14fb16656d70] Running
	I1105 18:58:15.349304   58421 system_pods.go:74] duration metric: took 173.739901ms to wait for pod list to return data ...
	I1105 18:58:15.349313   58421 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:58:15.547600   58421 default_sa.go:45] found service account: "default"
	I1105 18:58:15.547630   58421 default_sa.go:55] duration metric: took 198.303294ms for default service account to be created ...
	I1105 18:58:15.547639   58421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:58:15.749277   58421 system_pods.go:86] 6 kube-system pods found
	I1105 18:58:15.749304   58421 system_pods.go:89] "coredns-7c65d6cfc9-gwz48" [3cc42000-c8d8-452e-bc62-746d6be5a2cd] Running
	I1105 18:58:15.749311   58421 system_pods.go:89] "etcd-pause-616842" [6dcc846c-1784-482c-a494-ecf982fabbc9] Running
	I1105 18:58:15.749316   58421 system_pods.go:89] "kube-apiserver-pause-616842" [c1eb3f28-e7ea-4d1c-99ae-697450596e05] Running
	I1105 18:58:15.749320   58421 system_pods.go:89] "kube-controller-manager-pause-616842" [64e7490f-32fe-45fa-8954-38e81e9d70d0] Running
	I1105 18:58:15.749323   58421 system_pods.go:89] "kube-proxy-mgld6" [87a4c6d0-e674-4674-9c7b-0f859104617f] Running
	I1105 18:58:15.749330   58421 system_pods.go:89] "kube-scheduler-pause-616842" [b1228ca0-79cf-4410-9d06-14fb16656d70] Running
	I1105 18:58:15.749337   58421 system_pods.go:126] duration metric: took 201.693086ms to wait for k8s-apps to be running ...
	I1105 18:58:15.749346   58421 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:58:15.749388   58421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:58:15.766238   58421 system_svc.go:56] duration metric: took 16.881054ms WaitForService to wait for kubelet
	I1105 18:58:15.766270   58421 kubeadm.go:582] duration metric: took 3.177327507s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:58:15.766292   58421 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:58:15.948036   58421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:58:15.948059   58421 node_conditions.go:123] node cpu capacity is 2
	I1105 18:58:15.948069   58421 node_conditions.go:105] duration metric: took 181.772439ms to run NodePressure ...
	I1105 18:58:15.948080   58421 start.go:241] waiting for startup goroutines ...
	I1105 18:58:15.948086   58421 start.go:246] waiting for cluster config update ...
	I1105 18:58:15.948093   58421 start.go:255] writing updated cluster config ...
	I1105 18:58:15.948389   58421 ssh_runner.go:195] Run: rm -f paused
	I1105 18:58:16.010350   58421 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 18:58:16.012936   58421 out.go:177] * Done! kubectl is now configured to use "pause-616842" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.750982351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=495bc3d2-9514-4410-a5be-48201261f33b name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.752391640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00119a42-e05d-491a-9276-e5b202e2baa8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.753066004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833096753028412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00119a42-e05d-491a-9276-e5b202e2baa8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.753732550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=813e7743-205b-4f90-906b-3c0c69d6a081 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.753801385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=813e7743-205b-4f90-906b-3c0c69d6a081 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.754098979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833075071584481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833075049314478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833075012083759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833075015947023,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833069965343705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833064961335368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730833052527701202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730833051794978477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730833051787751435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730833051779088256,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833051749960577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730833051754711062,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=813e7743-205b-4f90-906b-3c0c69d6a081 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.777599815Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=991324b8-36fa-467d-b8c4-c0efb9cecb61 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.777804894Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gwz48,Uid:3cc42000-c8d8-452e-bc62-746d6be5a2cd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730833051591024848,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T18:56:14.349673371Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&PodSandboxMetadata{Name:kube-proxy-mgld6,Uid:87a4c6d0-e674-4674-9c7b-0f859104617f,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1730833051435682957,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T18:56:14.245432322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-616842,Uid:e816343b7e3d162bf9c42ed292dadb66,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730833051425662098,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,tier: control-plane,},Annotations:map[string
]string{kubernetes.io/config.hash: e816343b7e3d162bf9c42ed292dadb66,kubernetes.io/config.seen: 2024-11-05T18:56:08.908439437Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&PodSandboxMetadata{Name:etcd-pause-616842,Uid:3871ba72dbd0aaadef86788757049bfe,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730833051410347216,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.64:2379,kubernetes.io/config.hash: 3871ba72dbd0aaadef86788757049bfe,kubernetes.io/config.seen: 2024-11-05T18:56:08.908440254Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945
878,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-616842,Uid:29b842f9012c691fa0c5e82891687772,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730833051400289493,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29b842f9012c691fa0c5e82891687772,kubernetes.io/config.seen: 2024-11-05T18:56:08.908438463Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-616842,Uid:299229899d1c4f4e5705137cbf797041,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730833051369018481,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.64:8443,kubernetes.io/config.hash: 299229899d1c4f4e5705137cbf797041,kubernetes.io/config.seen: 2024-11-05T18:56:08.908433428Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=991324b8-36fa-467d-b8c4-c0efb9cecb61 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.779010068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=421254b4-1987-4faf-95fa-e6f9177b292e name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.779084566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=421254b4-1987-4faf-95fa-e6f9177b292e name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.779332678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833075071584481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833075049314478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833075012083759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833075015947023,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833069965343705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833064961335368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730833052527701202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730833051794978477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730833051787751435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730833051779088256,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833051749960577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730833051754711062,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=421254b4-1987-4faf-95fa-e6f9177b292e name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.801965021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b41956ce-1641-4caa-b468-e14f35786aad name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.802043377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b41956ce-1641-4caa-b468-e14f35786aad name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.803198321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1a40655-e458-44e3-87cd-2d4d9a97ac6f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.803685072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833096803645310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1a40655-e458-44e3-87cd-2d4d9a97ac6f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.804241342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11bbc680-d500-4f8c-8022-84d80a773a66 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.804297004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11bbc680-d500-4f8c-8022-84d80a773a66 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.804686892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833075071584481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833075049314478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833075012083759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833075015947023,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833069965343705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833064961335368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730833052527701202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730833051794978477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730833051787751435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730833051779088256,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833051749960577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730833051754711062,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11bbc680-d500-4f8c-8022-84d80a773a66 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.849073503Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae781680-36ef-4a81-8ee1-1cf233e0ed51 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.849207952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae781680-36ef-4a81-8ee1-1cf233e0ed51 name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.850775120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fcf482d-bb5e-482a-b91b-a60c5412a866 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.851608497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833096851541239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fcf482d-bb5e-482a-b91b-a60c5412a866 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.852458905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39e256a7-dfea-464e-b727-25181d7a9cf4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.852532969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39e256a7-dfea-464e-b727-25181d7a9cf4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:16 pause-616842 crio[2353]: time="2024-11-05 18:58:16.852951028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833075071584481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833075049314478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833075012083759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833075015947023,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833069965343705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833064961335368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730833052527701202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730833051794978477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730833051787751435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730833051779088256,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833051749960577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730833051754711062,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39e256a7-dfea-464e-b727-25181d7a9cf4 name=/runtime.v1.RuntimeService/ListContainers
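	The CRI-O entries above are debug-level gRPC traces (ListPodSandbox, ListContainers, Version, ImageFsInfo) recorded while the pause-616842 node was being polled. A minimal sketch of pulling the same runtime log straight from the node, assuming the systemd-managed crio unit used by the KVM guest (the exact invocation is an assumption, not part of the test harness):

	  minikube -p pause-616842 ssh -- sudo journalctl -u crio --no-pager --since "2024-11-05 18:58" | tail -n 50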
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39f18b945c536       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   21 seconds ago      Running             kube-controller-manager   2                   b472b31f05451       kube-controller-manager-pause-616842
	bde83f98d1dc5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   21 seconds ago      Running             kube-scheduler            2                   20966ef5e4afb       kube-scheduler-pause-616842
	d16f614ae991b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago      Running             etcd                      2                   a6b3182b2a5d7       etcd-pause-616842
	a5a7324a56a62       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 seconds ago      Running             kube-apiserver            2                   f8b880d174a62       kube-apiserver-pause-616842
	5bd878d7b1093       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   26 seconds ago      Running             coredns                   2                   d572250113fb8       coredns-7c65d6cfc9-gwz48
	f196c53aa24e7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   31 seconds ago      Running             kube-proxy                2                   494e41fb0b968       kube-proxy-mgld6
	3a1c19c7f2307       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   44 seconds ago      Exited              coredns                   1                   d572250113fb8       coredns-7c65d6cfc9-gwz48
	26a30cd22fcda       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   45 seconds ago      Exited              kube-proxy                1                   494e41fb0b968       kube-proxy-mgld6
	13c0a0d364adb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   45 seconds ago      Exited              etcd                      1                   a6b3182b2a5d7       etcd-pause-616842
	7d7ecae7e52fd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   45 seconds ago      Exited              kube-scheduler            1                   20966ef5e4afb       kube-scheduler-pause-616842
	d4352539586e6       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   45 seconds ago      Exited              kube-controller-manager   1                   b472b31f05451       kube-controller-manager-pause-616842
	0731f779ee1ae       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   45 seconds ago      Exited              kube-apiserver            1                   f8b880d174a62       kube-apiserver-pause-616842
	
	
	==> coredns [3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17] <==
	
	
	==> coredns [5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56928->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56928->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56936->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56936->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56940->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56940->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40869 - 4177 "HINFO IN 8507423948448639189.7213269714494655466. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011997848s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-616842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-616842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=pause-616842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T18_56_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-616842
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:58:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:57:58 +0000   Tue, 05 Nov 2024 18:56:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:57:58 +0000   Tue, 05 Nov 2024 18:56:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:57:58 +0000   Tue, 05 Nov 2024 18:56:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:57:58 +0000   Tue, 05 Nov 2024 18:56:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    pause-616842
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ada336593ef44f384ea611980f2f7d9
	  System UUID:                1ada3365-93ef-44f3-84ea-611980f2f7d9
	  Boot ID:                    90aa5978-2430-49a7-9634-5f6665b572f8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gwz48                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m3s
	  kube-system                 etcd-pause-616842                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m8s
	  kube-system                 kube-apiserver-pause-616842             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-pause-616842    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-mgld6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-pause-616842             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m1s                   kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node pause-616842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node pause-616842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node pause-616842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m9s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m8s                   kubelet          Node pause-616842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s                   kubelet          Node pause-616842 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m8s                   kubelet          Node pause-616842 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m8s                   kubelet          Node pause-616842 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m4s                   node-controller  Node pause-616842 event: Registered Node pause-616842 in Controller
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)      kubelet          Node pause-616842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)      kubelet          Node pause-616842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)      kubelet          Node pause-616842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                    node-controller  Node pause-616842 event: Registered Node pause-616842 in Controller
	
	
	==> dmesg <==
	[  +9.648610] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059874] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068569] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.170680] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.143856] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.292016] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.835513] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[Nov 5 18:56] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.063717] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.012474] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.092219] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.267582] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.114212] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.529563] kauditd_printk_skb: 88 callbacks suppressed
	[Nov 5 18:57] systemd-fstab-generator[2276]: Ignoring "noauto" option for root device
	[  +0.130724] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +0.157644] systemd-fstab-generator[2302]: Ignoring "noauto" option for root device
	[  +0.133667] systemd-fstab-generator[2314]: Ignoring "noauto" option for root device
	[  +0.276448] systemd-fstab-generator[2342]: Ignoring "noauto" option for root device
	[  +0.666999] systemd-fstab-generator[2464]: Ignoring "noauto" option for root device
	[ +12.373033] kauditd_printk_skb: 198 callbacks suppressed
	[ +10.828178] systemd-fstab-generator[3347]: Ignoring "noauto" option for root device
	[  +0.740184] kauditd_printk_skb: 24 callbacks suppressed
	[Nov 5 18:58] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.972721] systemd-fstab-generator[3700]: Ignoring "noauto" option for root device
	
	
	==> etcd [13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257] <==
	{"level":"info","ts":"2024-11-05T18:57:32.597880Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-11-05T18:57:32.646271Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","commit-index":474}
	{"level":"info","ts":"2024-11-05T18:57:32.646402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=()"}
	{"level":"info","ts":"2024-11-05T18:57:32.646468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became follower at term 2"}
	{"level":"info","ts":"2024-11-05T18:57:32.646488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7dcc3547d111063c [peers: [], term: 2, commit: 474, applied: 0, lastindex: 474, lastterm: 2]"}
	{"level":"warn","ts":"2024-11-05T18:57:32.653039Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-11-05T18:57:32.673752Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":447}
	{"level":"info","ts":"2024-11-05T18:57:32.694949Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-11-05T18:57:32.715710Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"7dcc3547d111063c","timeout":"7s"}
	{"level":"info","ts":"2024-11-05T18:57:32.716063Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"7dcc3547d111063c"}
	{"level":"info","ts":"2024-11-05T18:57:32.716122Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"7dcc3547d111063c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-11-05T18:57:32.716644Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:57:32.726228Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-11-05T18:57:32.727421Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7dcc3547d111063c","initial-advertise-peer-urls":["https://192.168.39.64:2380"],"listen-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-11-05T18:57:32.727495Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-11-05T18:57:32.726566Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-11-05T18:57:32.726738Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T18:57:32.727586Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T18:57:32.727596Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T18:57:32.736811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=(9064678732556469820)"}
	{"level":"info","ts":"2024-11-05T18:57:32.727051Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-11-05T18:57:32.739877Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-11-05T18:57:32.739722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","added-peer-id":"7dcc3547d111063c","added-peer-peer-urls":["https://192.168.39.64:2380"]}
	{"level":"info","ts":"2024-11-05T18:57:32.740083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T18:57:32.740172Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> etcd [d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee] <==
	{"level":"info","ts":"2024-11-05T18:57:56.430898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c is starting a new election at term 2"}
	{"level":"info","ts":"2024-11-05T18:57:56.430955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-11-05T18:57:56.430989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgPreVoteResp from 7dcc3547d111063c at term 2"}
	{"level":"info","ts":"2024-11-05T18:57:56.431011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became candidate at term 3"}
	{"level":"info","ts":"2024-11-05T18:57:56.431017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgVoteResp from 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-11-05T18:57:56.431026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became leader at term 3"}
	{"level":"info","ts":"2024-11-05T18:57:56.431033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7dcc3547d111063c elected leader 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-11-05T18:57:56.435075Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7dcc3547d111063c","local-member-attributes":"{Name:pause-616842 ClientURLs:[https://192.168.39.64:2379]}","request-path":"/0/members/7dcc3547d111063c/attributes","cluster-id":"c3619ef1effce12d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T18:57:56.435282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:57:56.439902Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:57:56.442183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:57:56.446502Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T18:57:56.446540Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-05T18:57:56.450931Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:57:56.456210Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T18:57:56.463573Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.64:2379"}
	{"level":"warn","ts":"2024-11-05T18:57:58.620941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.953159ms","expected-duration":"100ms","prefix":"","request":"header:<ID:449395681120755084 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-616842.18052782d32ee607\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-616842.18052782d32ee607\" value_size:584 lease:449395681120755075 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-11-05T18:57:58.621128Z","caller":"traceutil/trace.go:171","msg":"trace[883510494] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:486; }","duration":"160.108883ms","start":"2024-11-05T18:57:58.461009Z","end":"2024-11-05T18:57:58.621118Z","steps":["trace[883510494] 'read index received'  (duration: 110.094277ms)","trace[883510494] 'applied index is now lower than readState.Index'  (duration: 50.013849ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T18:57:58.621206Z","caller":"traceutil/trace.go:171","msg":"trace[2079350925] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"353.613104ms","start":"2024-11-05T18:57:58.267587Z","end":"2024-11-05T18:57:58.621200Z","steps":["trace[2079350925] 'process raft request'  (duration: 353.483507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:57:58.621245Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:57:58.267572Z","time spent":"353.650675ms","remote":"127.0.0.1:38056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":533,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-616842.18052783aaf07d24\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-616842.18052783aaf07d24\" value_size:461 lease:449395681120755082 >> failure:<>"}
	{"level":"info","ts":"2024-11-05T18:57:58.621273Z","caller":"traceutil/trace.go:171","msg":"trace[1531555745] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"353.902961ms","start":"2024-11-05T18:57:58.267247Z","end":"2024-11-05T18:57:58.621150Z","steps":["trace[1531555745] 'process raft request'  (duration: 20.190726ms)","trace[1531555745] 'compare'  (duration: 332.821976ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T18:57:58.621524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.536495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T18:57:58.621590Z","caller":"traceutil/trace.go:171","msg":"trace[2099495959] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:456; }","duration":"160.627309ms","start":"2024-11-05T18:57:58.460953Z","end":"2024-11-05T18:57:58.621581Z","steps":["trace[2099495959] 'agreement among raft nodes before linearized reading'  (duration: 160.475477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:57:58.622402Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:57:58.267235Z","time spent":"354.184229ms","remote":"127.0.0.1:37630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":656,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-616842.18052782d32ee607\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-616842.18052782d32ee607\" value_size:584 lease:449395681120755075 >> failure:<>"}
	{"level":"info","ts":"2024-11-05T18:58:10.932600Z","caller":"traceutil/trace.go:171","msg":"trace[1508301133] transaction","detail":"{read_only:false; response_revision:518; number_of_response:1; }","duration":"129.445555ms","start":"2024-11-05T18:58:10.803118Z","end":"2024-11-05T18:58:10.932564Z","steps":["trace[1508301133] 'process raft request'  (duration: 129.323697ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:58:17 up 2 min,  0 users,  load average: 1.07, 0.55, 0.21
	Linux pause-616842 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9] <==
	I1105 18:57:32.288428       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:57:32.961600       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1105 18:57:32.981921       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1105 18:57:32.986526       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1105 18:57:32.987246       1 instance.go:232] Using reconciler: lease
	I1105 18:57:32.986447       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W1105 18:57:33.082022       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45592->127.0.0.1:2379: read: connection reset by peer"
	W1105 18:57:33.082196       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45576->127.0.0.1:2379: read: connection reset by peer"
	W1105 18:57:33.082343       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45582->127.0.0.1:2379: read: connection reset by peer"
	W1105 18:57:34.082512       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:34.082910       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:34.083074       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:35.591080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:35.677411       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:35.893137       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:37.986550       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:38.177507       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:38.674184       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:41.985521       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:42.683702       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:42.992240       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:47.869642       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:48.066333       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:49.268097       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1105 18:57:52.989430       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629] <==
	I1105 18:57:58.079096       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:57:58.079201       1 policy_source.go:224] refreshing policies
	I1105 18:57:58.079955       1 shared_informer.go:320] Caches are synced for configmaps
	I1105 18:57:58.127102       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:57:58.139625       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 18:57:58.146937       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1105 18:57:58.147175       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:57:58.149359       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 18:57:58.149408       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 18:57:58.149618       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 18:57:58.158430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 18:57:58.158528       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 18:57:58.158611       1 aggregator.go:171] initial CRD sync complete...
	I1105 18:57:58.158635       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 18:57:58.158640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 18:57:58.158645       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:57:58.184978       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1105 18:57:58.945693       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1105 18:57:59.350156       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 18:57:59.362064       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 18:57:59.401876       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 18:57:59.429130       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1105 18:57:59.436057       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1105 18:58:01.619760       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:58:01.683485       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4] <==
	I1105 18:58:01.565271       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1105 18:58:01.568591       1 shared_informer.go:320] Caches are synced for daemon sets
	I1105 18:58:01.570709       1 shared_informer.go:320] Caches are synced for ephemeral
	I1105 18:58:01.576722       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1105 18:58:01.576871       1 shared_informer.go:320] Caches are synced for stateful set
	I1105 18:58:01.584905       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1105 18:58:01.585007       1 shared_informer.go:320] Caches are synced for GC
	I1105 18:58:01.585064       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1105 18:58:01.585107       1 shared_informer.go:320] Caches are synced for job
	I1105 18:58:01.596034       1 shared_informer.go:320] Caches are synced for resource quota
	I1105 18:58:01.610619       1 shared_informer.go:320] Caches are synced for endpoint
	I1105 18:58:01.612453       1 shared_informer.go:320] Caches are synced for deployment
	I1105 18:58:01.618615       1 shared_informer.go:320] Caches are synced for disruption
	I1105 18:58:01.627308       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1105 18:58:01.627723       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1105 18:58:01.627899       1 shared_informer.go:320] Caches are synced for HPA
	I1105 18:58:01.627892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.034µs"
	I1105 18:58:01.629192       1 shared_informer.go:320] Caches are synced for persistent volume
	I1105 18:58:01.629468       1 shared_informer.go:320] Caches are synced for attach detach
	I1105 18:58:02.010200       1 shared_informer.go:320] Caches are synced for garbage collector
	I1105 18:58:02.076912       1 shared_informer.go:320] Caches are synced for garbage collector
	I1105 18:58:02.076954       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1105 18:58:02.395575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.69058ms"
	I1105 18:58:02.413015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.347915ms"
	I1105 18:58:02.414617       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.069µs"
	
	
	==> kube-controller-manager [d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283] <==
	I1105 18:57:33.269128       1 serving.go:386] Generated self-signed cert in-memory
	I1105 18:57:33.473481       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1105 18:57:33.473530       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:57:33.475452       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1105 18:57:33.476103       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1105 18:57:33.476204       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1105 18:57:33.476257       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9] <==
	
	
	==> kube-proxy [f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d] <==
	 >
	E1105 18:57:45.107501       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:57:53.996376       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-616842\": dial tcp 192.168.39.64:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.64:38128->192.168.39.64:8443: read: connection reset by peer"
	E1105 18:57:55.173062       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-616842\": dial tcp 192.168.39.64:8443: connect: connection refused"
	I1105 18:57:58.142622       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.64"]
	E1105 18:57:58.142794       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:57:58.249201       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:57:58.249240       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:57:58.249296       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:57:58.252471       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:57:58.252773       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:57:58.252814       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:57:58.256420       1 config.go:199] "Starting service config controller"
	I1105 18:57:58.256478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:57:58.256533       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:57:58.256558       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:57:58.257826       1 config.go:328] "Starting node config controller"
	I1105 18:57:58.257902       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:57:58.357670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:57:58.357731       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:57:58.357993       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d] <==
	I1105 18:57:33.742154       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122] <==
	I1105 18:57:56.184073       1 serving.go:386] Generated self-signed cert in-memory
	W1105 18:57:58.046965       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1105 18:57:58.047094       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1105 18:57:58.047124       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1105 18:57:58.047194       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1105 18:57:58.134795       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1105 18:57:58.137248       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:57:58.145182       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1105 18:57:58.147463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1105 18:57:58.153166       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 18:57:58.147483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1105 18:57:58.253949       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.940417    3354 kubelet_node_status.go:72] "Attempting to register node" node="pause-616842"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: E1105 18:57:54.941966    3354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.64:8443: connect: connection refused" node="pause-616842"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.982455    3354 scope.go:117] "RemoveContainer" containerID="13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.984465    3354 scope.go:117] "RemoveContainer" containerID="0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.993725    3354 scope.go:117] "RemoveContainer" containerID="7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.993957    3354 scope.go:117] "RemoveContainer" containerID="d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283"
	Nov 05 18:57:55 pause-616842 kubelet[3354]: E1105 18:57:55.144204    3354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-616842?timeout=10s\": dial tcp 192.168.39.64:8443: connect: connection refused" interval="800ms"
	Nov 05 18:57:55 pause-616842 kubelet[3354]: W1105 18:57:55.332994    3354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-616842&limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	Nov 05 18:57:55 pause-616842 kubelet[3354]: E1105 18:57:55.333128    3354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-616842&limit=500&resourceVersion=0\": dial tcp 192.168.39.64:8443: connect: connection refused" logger="UnhandledError"
	Nov 05 18:57:55 pause-616842 kubelet[3354]: I1105 18:57:55.343791    3354 kubelet_node_status.go:72] "Attempting to register node" node="pause-616842"
	Nov 05 18:57:55 pause-616842 kubelet[3354]: E1105 18:57:55.344699    3354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.64:8443: connect: connection refused" node="pause-616842"
	Nov 05 18:57:56 pause-616842 kubelet[3354]: I1105 18:57:56.146339    3354 kubelet_node_status.go:72] "Attempting to register node" node="pause-616842"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.193703    3354 kubelet_node_status.go:111] "Node was previously registered" node="pause-616842"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.194420    3354 kubelet_node_status.go:75] "Successfully registered node" node="pause-616842"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.194597    3354 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.196462    3354 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.519810    3354 apiserver.go:52] "Watching apiserver"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.538924    3354 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.621785    3354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87a4c6d0-e674-4674-9c7b-0f859104617f-lib-modules\") pod \"kube-proxy-mgld6\" (UID: \"87a4c6d0-e674-4674-9c7b-0f859104617f\") " pod="kube-system/kube-proxy-mgld6"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.621867    3354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87a4c6d0-e674-4674-9c7b-0f859104617f-xtables-lock\") pod \"kube-proxy-mgld6\" (UID: \"87a4c6d0-e674-4674-9c7b-0f859104617f\") " pod="kube-system/kube-proxy-mgld6"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: E1105 18:57:58.648345    3354 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-616842\" already exists" pod="kube-system/kube-controller-manager-pause-616842"
	Nov 05 18:58:04 pause-616842 kubelet[3354]: E1105 18:58:04.640676    3354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833084640392496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:58:04 pause-616842 kubelet[3354]: E1105 18:58:04.640717    3354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833084640392496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:58:14 pause-616842 kubelet[3354]: E1105 18:58:14.643411    3354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833094642457105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:58:14 pause-616842 kubelet[3354]: E1105 18:58:14.643452    3354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833094642457105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-616842 -n pause-616842
helpers_test.go:261: (dbg) Run:  kubectl --context pause-616842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-616842 -n pause-616842
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-616842 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-616842 logs -n 25: (1.511460118s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-929548 sudo crictl                           | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo crictl ps                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | --all                                                |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo find                             | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo ip a s                           | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	| ssh     | -p auto-929548 sudo ip r s                           | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	| start   | -p kubernetes-upgrade-906991                         | kubernetes-upgrade-906991 | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo                                  | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo iptables                         | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | status kubelet --all --full                          |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | cat kubelet --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo journalctl                       | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | -xeu kubelet --all --full                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat                              | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat                              | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat                              | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo docker                           | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat                              | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat                              | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo                                  | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo systemctl                        | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC | 05 Nov 24 18:58 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-929548 sudo cat                              | auto-929548               | jenkins | v1.34.0 | 05 Nov 24 18:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:58:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:58:13.739379   59622 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:58:13.739623   59622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:58:13.739633   59622 out.go:358] Setting ErrFile to fd 2...
	I1105 18:58:13.739643   59622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:58:13.739862   59622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:58:13.740424   59622 out.go:352] Setting JSON to false
	I1105 18:58:13.741584   59622 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6036,"bootTime":1730827058,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:58:13.741698   59622 start.go:139] virtualization: kvm guest
	I1105 18:58:13.744158   59622 out.go:177] * [kubernetes-upgrade-906991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:58:13.745695   59622 notify.go:220] Checking for updates...
	I1105 18:58:13.745709   59622 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:58:13.747095   59622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:58:13.748517   59622 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:58:13.749939   59622 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:58:13.751230   59622 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:58:13.752390   59622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:58:13.754214   59622 config.go:182] Loaded profile config "kubernetes-upgrade-906991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 18:58:13.754599   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:13.754640   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:13.770829   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I1105 18:58:13.771222   59622 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:13.771763   59622 main.go:141] libmachine: Using API Version  1
	I1105 18:58:13.771807   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:13.772251   59622 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:13.772439   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:58:13.772741   59622 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:58:13.773111   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:13.773146   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:13.789930   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I1105 18:58:13.790362   59622 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:13.790962   59622 main.go:141] libmachine: Using API Version  1
	I1105 18:58:13.791045   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:13.791443   59622 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:13.791628   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:58:13.827533   59622 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:58:13.828890   59622 start.go:297] selected driver: kvm2
	I1105 18:58:13.828911   59622 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-906991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:58:13.829018   59622 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:58:13.829704   59622 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:58:13.829784   59622 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 18:58:13.844850   59622 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 18:58:13.845232   59622 cni.go:84] Creating CNI manager for ""
	I1105 18:58:13.845280   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 18:58:13.845309   59622 start.go:340] cluster config:
	{Name:kubernetes-upgrade-906991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-906991 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:58:13.845414   59622 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:58:13.847788   59622 out.go:177] * Starting "kubernetes-upgrade-906991" primary control-plane node in "kubernetes-upgrade-906991" cluster
	I1105 18:58:13.848991   59622 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:58:13.849057   59622 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 18:58:13.849072   59622 cache.go:56] Caching tarball of preloaded images
	I1105 18:58:13.849157   59622 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 18:58:13.849171   59622 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:58:13.849264   59622 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kubernetes-upgrade-906991/config.json ...
	I1105 18:58:13.849462   59622 start.go:360] acquireMachinesLock for kubernetes-upgrade-906991: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 18:58:13.849513   59622 start.go:364] duration metric: took 31.107µs to acquireMachinesLock for "kubernetes-upgrade-906991"
	I1105 18:58:13.849538   59622 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:58:13.849548   59622 fix.go:54] fixHost starting: 
	I1105 18:58:13.849897   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:58:13.849938   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:58:13.867099   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I1105 18:58:13.867534   59622 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:58:13.868022   59622 main.go:141] libmachine: Using API Version  1
	I1105 18:58:13.868050   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:58:13.868335   59622 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:58:13.868532   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	I1105 18:58:13.868685   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .GetState
	I1105 18:58:13.870560   59622 fix.go:112] recreateIfNeeded on kubernetes-upgrade-906991: state=Stopped err=<nil>
	I1105 18:58:13.870589   59622 main.go:141] libmachine: (kubernetes-upgrade-906991) Calling .DriverName
	W1105 18:58:13.870777   59622 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:58:13.872403   59622 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-906991" ...
	I1105 18:58:12.592399   58421 addons.go:510] duration metric: took 3.308253ms for enable addons: enabled=[]
	I1105 18:58:12.592452   58421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:58:12.801860   58421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:58:12.819908   58421 node_ready.go:35] waiting up to 6m0s for node "pause-616842" to be "Ready" ...
	I1105 18:58:12.823418   58421 node_ready.go:49] node "pause-616842" has status "Ready":"True"
	I1105 18:58:12.823446   58421 node_ready.go:38] duration metric: took 3.502143ms for node "pause-616842" to be "Ready" ...
	I1105 18:58:12.823458   58421 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:58:12.828668   58421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gwz48" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.148492   58421 pod_ready.go:93] pod "coredns-7c65d6cfc9-gwz48" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:13.148520   58421 pod_ready.go:82] duration metric: took 319.827492ms for pod "coredns-7c65d6cfc9-gwz48" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.148531   58421 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.547838   58421 pod_ready.go:93] pod "etcd-pause-616842" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:13.547864   58421 pod_ready.go:82] duration metric: took 399.325971ms for pod "etcd-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.547876   58421 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.947091   58421 pod_ready.go:93] pod "kube-apiserver-pause-616842" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:13.947116   58421 pod_ready.go:82] duration metric: took 399.231462ms for pod "kube-apiserver-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:13.947130   58421 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:14.346946   58421 pod_ready.go:93] pod "kube-controller-manager-pause-616842" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:14.346996   58421 pod_ready.go:82] duration metric: took 399.855267ms for pod "kube-controller-manager-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:14.347012   58421 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mgld6" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:14.747163   58421 pod_ready.go:93] pod "kube-proxy-mgld6" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:14.747193   58421 pod_ready.go:82] duration metric: took 400.172502ms for pod "kube-proxy-mgld6" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:14.747207   58421 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:15.147608   58421 pod_ready.go:93] pod "kube-scheduler-pause-616842" in "kube-system" namespace has status "Ready":"True"
	I1105 18:58:15.147637   58421 pod_ready.go:82] duration metric: took 400.4217ms for pod "kube-scheduler-pause-616842" in "kube-system" namespace to be "Ready" ...
	I1105 18:58:15.147648   58421 pod_ready.go:39] duration metric: took 2.324179253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:58:15.147663   58421 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:58:15.147718   58421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:58:15.168745   58421 api_server.go:72] duration metric: took 2.579797777s to wait for apiserver process to appear ...
	I1105 18:58:15.168770   58421 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:58:15.168793   58421 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I1105 18:58:15.174177   58421 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I1105 18:58:15.175515   58421 api_server.go:141] control plane version: v1.31.2
	I1105 18:58:15.175542   58421 api_server.go:131] duration metric: took 6.764268ms to wait for apiserver health ...
	I1105 18:58:15.175552   58421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:58:15.349248   58421 system_pods.go:59] 6 kube-system pods found
	I1105 18:58:15.349278   58421 system_pods.go:61] "coredns-7c65d6cfc9-gwz48" [3cc42000-c8d8-452e-bc62-746d6be5a2cd] Running
	I1105 18:58:15.349282   58421 system_pods.go:61] "etcd-pause-616842" [6dcc846c-1784-482c-a494-ecf982fabbc9] Running
	I1105 18:58:15.349286   58421 system_pods.go:61] "kube-apiserver-pause-616842" [c1eb3f28-e7ea-4d1c-99ae-697450596e05] Running
	I1105 18:58:15.349290   58421 system_pods.go:61] "kube-controller-manager-pause-616842" [64e7490f-32fe-45fa-8954-38e81e9d70d0] Running
	I1105 18:58:15.349293   58421 system_pods.go:61] "kube-proxy-mgld6" [87a4c6d0-e674-4674-9c7b-0f859104617f] Running
	I1105 18:58:15.349296   58421 system_pods.go:61] "kube-scheduler-pause-616842" [b1228ca0-79cf-4410-9d06-14fb16656d70] Running
	I1105 18:58:15.349304   58421 system_pods.go:74] duration metric: took 173.739901ms to wait for pod list to return data ...
	I1105 18:58:15.349313   58421 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:58:15.547600   58421 default_sa.go:45] found service account: "default"
	I1105 18:58:15.547630   58421 default_sa.go:55] duration metric: took 198.303294ms for default service account to be created ...
	I1105 18:58:15.547639   58421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:58:15.749277   58421 system_pods.go:86] 6 kube-system pods found
	I1105 18:58:15.749304   58421 system_pods.go:89] "coredns-7c65d6cfc9-gwz48" [3cc42000-c8d8-452e-bc62-746d6be5a2cd] Running
	I1105 18:58:15.749311   58421 system_pods.go:89] "etcd-pause-616842" [6dcc846c-1784-482c-a494-ecf982fabbc9] Running
	I1105 18:58:15.749316   58421 system_pods.go:89] "kube-apiserver-pause-616842" [c1eb3f28-e7ea-4d1c-99ae-697450596e05] Running
	I1105 18:58:15.749320   58421 system_pods.go:89] "kube-controller-manager-pause-616842" [64e7490f-32fe-45fa-8954-38e81e9d70d0] Running
	I1105 18:58:15.749323   58421 system_pods.go:89] "kube-proxy-mgld6" [87a4c6d0-e674-4674-9c7b-0f859104617f] Running
	I1105 18:58:15.749330   58421 system_pods.go:89] "kube-scheduler-pause-616842" [b1228ca0-79cf-4410-9d06-14fb16656d70] Running
	I1105 18:58:15.749337   58421 system_pods.go:126] duration metric: took 201.693086ms to wait for k8s-apps to be running ...
	I1105 18:58:15.749346   58421 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:58:15.749388   58421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:58:15.766238   58421 system_svc.go:56] duration metric: took 16.881054ms WaitForService to wait for kubelet
	I1105 18:58:15.766270   58421 kubeadm.go:582] duration metric: took 3.177327507s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:58:15.766292   58421 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:58:15.948036   58421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 18:58:15.948059   58421 node_conditions.go:123] node cpu capacity is 2
	I1105 18:58:15.948069   58421 node_conditions.go:105] duration metric: took 181.772439ms to run NodePressure ...
	I1105 18:58:15.948080   58421 start.go:241] waiting for startup goroutines ...
	I1105 18:58:15.948086   58421 start.go:246] waiting for cluster config update ...
	I1105 18:58:15.948093   58421 start.go:255] writing updated cluster config ...
	I1105 18:58:15.948389   58421 ssh_runner.go:195] Run: rm -f paused
	I1105 18:58:16.010350   58421 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 18:58:16.012936   58421 out.go:177] * Done! kubectl is now configured to use "pause-616842" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.831945884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833098831905868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c305c768-74bf-4174-8616-48aa7fec67b9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.832873691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c73e512-5290-4164-b865-df542f3c1f42 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.832965767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c73e512-5290-4164-b865-df542f3c1f42 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.833335960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833075071584481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833075049314478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833075012083759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833075015947023,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833069965343705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833064961335368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730833052527701202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730833051794978477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730833051787751435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730833051779088256,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833051749960577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730833051754711062,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c73e512-5290-4164-b865-df542f3c1f42 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.898538834Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75129892-885e-42e2-9b79-d201c14d8a7e name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.898657380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75129892-885e-42e2-9b79-d201c14d8a7e name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.900035574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e350d75-afdb-4062-ba64-e165dd018f82 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.900636213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833098900603980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e350d75-afdb-4062-ba64-e165dd018f82 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.901261356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36ca0822-dc61-4b18-aa43-de01240e9492 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.901353115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36ca0822-dc61-4b18-aa43-de01240e9492 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.901688173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833075071584481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833075049314478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833075012083759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833075015947023,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833069965343705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833064961335368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730833052527701202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730833051794978477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730833051787751435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730833051779088256,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833051749960577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730833051754711062,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36ca0822-dc61-4b18-aa43-de01240e9492 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.953179902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f68683d8-2d07-4d04-9af5-4837e0f92a7f name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.953290748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f68683d8-2d07-4d04-9af5-4837e0f92a7f name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.954591807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d20f0636-6cad-4afd-a18f-3e9eb7fdd913 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.955172634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833098955139015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d20f0636-6cad-4afd-a18f-3e9eb7fdd913 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.955905464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea3ef811-f031-4ee6-a0ca-4449d5f9cbfd name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.955990916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea3ef811-f031-4ee6-a0ca-4449d5f9cbfd name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:18 pause-616842 crio[2353]: time="2024-11-05 18:58:18.956323135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833075071584481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833075049314478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833075012083759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833075015947023,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833069965343705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833064961335368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730833052527701202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730833051794978477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730833051787751435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730833051779088256,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833051749960577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730833051754711062,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea3ef811-f031-4ee6-a0ca-4449d5f9cbfd name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:19 pause-616842 crio[2353]: time="2024-11-05 18:58:19.007776141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e835b27a-a5f8-46b2-afa9-0ad93244633b name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:19 pause-616842 crio[2353]: time="2024-11-05 18:58:19.007934496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e835b27a-a5f8-46b2-afa9-0ad93244633b name=/runtime.v1.RuntimeService/Version
	Nov 05 18:58:19 pause-616842 crio[2353]: time="2024-11-05 18:58:19.015181445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcdbefea-3430-4f65-abac-2ff828ae9361 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:19 pause-616842 crio[2353]: time="2024-11-05 18:58:19.015752699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833099015713982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcdbefea-3430-4f65-abac-2ff828ae9361 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 18:58:19 pause-616842 crio[2353]: time="2024-11-05 18:58:19.017011249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3d77215-a846-4b3b-a680-5225dec662c5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:19 pause-616842 crio[2353]: time="2024-11-05 18:58:19.017206358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3d77215-a846-4b3b-a680-5225dec662c5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 18:58:19 pause-616842 crio[2353]: time="2024-11-05 18:58:19.017661597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833075071584481,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833075049314478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833075012083759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833075015947023,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833069965343705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833064961335368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17,PodSandboxId:d572250113fb811ebdce7ced18302118fa1a292b6eb28479ad27a817ba9c71fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730833052527701202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gwz48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc42000-c8d8-452e-bc62-746d6be5a2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9,PodSandboxId:494e41fb0b968a529e3f36a03130ac54b0e62031cfce1ab6895ddec04aa1769a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730833051794978477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-mgld6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4c6d0-e674-4674-9c7b-0f859104617f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257,PodSandboxId:a6b3182b2a5d7c57d8d0aa0fbd404e2dfa735d868d70bb1139483838191c7037,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730833051787751435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-616842,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3871ba72dbd0aaadef86788757049bfe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d,PodSandboxId:20966ef5e4afb7f3f2c3791480b06a6c319190b90781845d161ce63662880ab7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730833051779088256,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-616842,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: e816343b7e3d162bf9c42ed292dadb66,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9,PodSandboxId:f8b880d174a62bde058324004f6bf11276897bafdc675642ec781fd0e1fb50d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833051749960577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 299229899d1c4f4e5705137cbf797041,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283,PodSandboxId:b472b31f05451a7cc8c71da195f20b57f66233dd9daf22e3d394cb24e5945878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730833051754711062,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-616842,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29b842f9012c691fa0c5e82891687772,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3d77215-a846-4b3b-a680-5225dec662c5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39f18b945c536       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   24 seconds ago      Running             kube-controller-manager   2                   b472b31f05451       kube-controller-manager-pause-616842
	bde83f98d1dc5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   24 seconds ago      Running             kube-scheduler            2                   20966ef5e4afb       kube-scheduler-pause-616842
	d16f614ae991b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   24 seconds ago      Running             etcd                      2                   a6b3182b2a5d7       etcd-pause-616842
	a5a7324a56a62       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   24 seconds ago      Running             kube-apiserver            2                   f8b880d174a62       kube-apiserver-pause-616842
	5bd878d7b1093       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   29 seconds ago      Running             coredns                   2                   d572250113fb8       coredns-7c65d6cfc9-gwz48
	f196c53aa24e7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   34 seconds ago      Running             kube-proxy                2                   494e41fb0b968       kube-proxy-mgld6
	3a1c19c7f2307       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   46 seconds ago      Exited              coredns                   1                   d572250113fb8       coredns-7c65d6cfc9-gwz48
	26a30cd22fcda       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   47 seconds ago      Exited              kube-proxy                1                   494e41fb0b968       kube-proxy-mgld6
	13c0a0d364adb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   47 seconds ago      Exited              etcd                      1                   a6b3182b2a5d7       etcd-pause-616842
	7d7ecae7e52fd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   47 seconds ago      Exited              kube-scheduler            1                   20966ef5e4afb       kube-scheduler-pause-616842
	d4352539586e6       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   47 seconds ago      Exited              kube-controller-manager   1                   b472b31f05451       kube-controller-manager-pause-616842
	0731f779ee1ae       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   47 seconds ago      Exited              kube-apiserver            1                   f8b880d174a62       kube-apiserver-pause-616842
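	The table above is the CRI-O view of the node after the pause/unpause cycle: attempt-2 copies of every control-plane component plus coredns and kube-proxy are Running, while the attempt-1 copies from the earlier restart are Exited. A minimal sketch of how the same listing could be reproduced by hand against this profile, assuming crictl is available inside the minikube VM and the profile name matches this run:
	
	    minikube -p pause-616842 ssh "sudo crictl ps -a"
	    minikube -p pause-616842 ssh "sudo crictl inspect d16f614ae991b"   # full metadata for one container (restart count, pod UID, ...)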
	
	
	==> coredns [3a1c19c7f2307ffc1b4a9c2c37ce05f996ae8105322f3b7f38ef36bc80d7ce17] <==
	
	
	==> coredns [5bd878d7b1093a7be40bf258269c89db22454a514d9b2a1e35663d52f7966fe9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56928->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56928->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56936->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56936->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56940->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:56940->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40869 - 4177 "HINFO IN 8507423948448639189.7213269714494655466. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011997848s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
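	The running coredns instance spends its first seconds waiting for the Kubernetes API, logging connection-refused and connection-reset errors against the 10.96.0.1:443 service address while kube-apiserver was itself restarting, and eventually starts with an unsynced cache (the WARNING line) before the final two errors. A hedged example of pulling these logs again, using the pod name from this run; --previous only helps while the earlier attempt's log file is still retained by the kubelet:
	
	    kubectl --context pause-616842 -n kube-system logs coredns-7c65d6cfc9-gwz48
	    kubectl --context pause-616842 -n kube-system logs coredns-7c65d6cfc9-gwz48 --previous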
	
	
	==> describe nodes <==
	Name:               pause-616842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-616842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=pause-616842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T18_56_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-616842
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:58:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:57:58 +0000   Tue, 05 Nov 2024 18:56:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:57:58 +0000   Tue, 05 Nov 2024 18:56:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:57:58 +0000   Tue, 05 Nov 2024 18:56:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:57:58 +0000   Tue, 05 Nov 2024 18:56:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    pause-616842
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ada336593ef44f384ea611980f2f7d9
	  System UUID:                1ada3365-93ef-44f3-84ea-611980f2f7d9
	  Boot ID:                    90aa5978-2430-49a7-9634-5f6665b572f8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gwz48                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m5s
	  kube-system                 etcd-pause-616842                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m10s
	  kube-system                 kube-apiserver-pause-616842             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-pause-616842    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-mgld6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-pause-616842             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m3s                   kube-proxy       
	  Normal  Starting                 21s                    kube-proxy       
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node pause-616842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node pause-616842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m16s)  kubelet          Node pause-616842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m10s                  kubelet          Node pause-616842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s                  kubelet          Node pause-616842 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m10s                  kubelet          Node pause-616842 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m10s                  kubelet          Node pause-616842 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m6s                   node-controller  Node pause-616842 event: Registered Node pause-616842 in Controller
	  Normal  Starting                 25s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)      kubelet          Node pause-616842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)      kubelet          Node pause-616842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)      kubelet          Node pause-616842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node pause-616842 event: Registered Node pause-616842 in Controller
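	The events make the restart history of pause-616842 readable: the initial kubelet start roughly 2m16s before capture, a second start at 2m11s when the cluster was bootstrapped, and the post-pause restart 25s before capture, each followed by the node-controller re-registering the node. The block can be regenerated with a single command, assuming the kubeconfig context minikube created for this profile:
	
	    kubectl --context pause-616842 describe node pause-616842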
	
	
	==> dmesg <==
	[  +9.648610] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059874] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068569] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.170680] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.143856] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.292016] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.835513] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[Nov 5 18:56] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.063717] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.012474] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.092219] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.267582] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.114212] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.529563] kauditd_printk_skb: 88 callbacks suppressed
	[Nov 5 18:57] systemd-fstab-generator[2276]: Ignoring "noauto" option for root device
	[  +0.130724] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +0.157644] systemd-fstab-generator[2302]: Ignoring "noauto" option for root device
	[  +0.133667] systemd-fstab-generator[2314]: Ignoring "noauto" option for root device
	[  +0.276448] systemd-fstab-generator[2342]: Ignoring "noauto" option for root device
	[  +0.666999] systemd-fstab-generator[2464]: Ignoring "noauto" option for root device
	[ +12.373033] kauditd_printk_skb: 198 callbacks suppressed
	[ +10.828178] systemd-fstab-generator[3347]: Ignoring "noauto" option for root device
	[  +0.740184] kauditd_printk_skb: 24 callbacks suppressed
	[Nov 5 18:58] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.972721] systemd-fstab-generator[3700]: Ignoring "noauto" option for root device
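	The dmesg excerpt mostly records systemd-fstab-generator noise from the two kubelet/CRI-O restarts rather than kernel problems. If needed it can be recaptured from the VM, assuming the profile from this run:
	
	    minikube -p pause-616842 ssh "dmesg"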
	
	
	==> etcd [13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257] <==
	{"level":"info","ts":"2024-11-05T18:57:32.597880Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-11-05T18:57:32.646271Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","commit-index":474}
	{"level":"info","ts":"2024-11-05T18:57:32.646402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=()"}
	{"level":"info","ts":"2024-11-05T18:57:32.646468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became follower at term 2"}
	{"level":"info","ts":"2024-11-05T18:57:32.646488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7dcc3547d111063c [peers: [], term: 2, commit: 474, applied: 0, lastindex: 474, lastterm: 2]"}
	{"level":"warn","ts":"2024-11-05T18:57:32.653039Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-11-05T18:57:32.673752Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":447}
	{"level":"info","ts":"2024-11-05T18:57:32.694949Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-11-05T18:57:32.715710Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"7dcc3547d111063c","timeout":"7s"}
	{"level":"info","ts":"2024-11-05T18:57:32.716063Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"7dcc3547d111063c"}
	{"level":"info","ts":"2024-11-05T18:57:32.716122Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"7dcc3547d111063c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-11-05T18:57:32.716644Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:57:32.726228Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-11-05T18:57:32.727421Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7dcc3547d111063c","initial-advertise-peer-urls":["https://192.168.39.64:2380"],"listen-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-11-05T18:57:32.727495Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-11-05T18:57:32.726566Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-11-05T18:57:32.726738Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T18:57:32.727586Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T18:57:32.727596Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T18:57:32.736811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=(9064678732556469820)"}
	{"level":"info","ts":"2024-11-05T18:57:32.727051Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-11-05T18:57:32.739877Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-11-05T18:57:32.739722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","added-peer-id":"7dcc3547d111063c","added-peer-peer-urls":["https://192.168.39.64:2380"]}
	{"level":"info","ts":"2024-11-05T18:57:32.740083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T18:57:32.740172Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> etcd [d16f614ae991b4db4951c156b2cadb57382de035e10c1f30926c69e0437363ee] <==
	{"level":"info","ts":"2024-11-05T18:57:56.430898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c is starting a new election at term 2"}
	{"level":"info","ts":"2024-11-05T18:57:56.430955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-11-05T18:57:56.430989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgPreVoteResp from 7dcc3547d111063c at term 2"}
	{"level":"info","ts":"2024-11-05T18:57:56.431011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became candidate at term 3"}
	{"level":"info","ts":"2024-11-05T18:57:56.431017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgVoteResp from 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-11-05T18:57:56.431026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became leader at term 3"}
	{"level":"info","ts":"2024-11-05T18:57:56.431033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7dcc3547d111063c elected leader 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-11-05T18:57:56.435075Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7dcc3547d111063c","local-member-attributes":"{Name:pause-616842 ClientURLs:[https://192.168.39.64:2379]}","request-path":"/0/members/7dcc3547d111063c/attributes","cluster-id":"c3619ef1effce12d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T18:57:56.435282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:57:56.439902Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:57:56.442183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:57:56.446502Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T18:57:56.446540Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-05T18:57:56.450931Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:57:56.456210Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T18:57:56.463573Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.64:2379"}
	{"level":"warn","ts":"2024-11-05T18:57:58.620941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.953159ms","expected-duration":"100ms","prefix":"","request":"header:<ID:449395681120755084 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-616842.18052782d32ee607\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-616842.18052782d32ee607\" value_size:584 lease:449395681120755075 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-11-05T18:57:58.621128Z","caller":"traceutil/trace.go:171","msg":"trace[883510494] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:486; }","duration":"160.108883ms","start":"2024-11-05T18:57:58.461009Z","end":"2024-11-05T18:57:58.621118Z","steps":["trace[883510494] 'read index received'  (duration: 110.094277ms)","trace[883510494] 'applied index is now lower than readState.Index'  (duration: 50.013849ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T18:57:58.621206Z","caller":"traceutil/trace.go:171","msg":"trace[2079350925] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"353.613104ms","start":"2024-11-05T18:57:58.267587Z","end":"2024-11-05T18:57:58.621200Z","steps":["trace[2079350925] 'process raft request'  (duration: 353.483507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:57:58.621245Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:57:58.267572Z","time spent":"353.650675ms","remote":"127.0.0.1:38056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":533,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-616842.18052783aaf07d24\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-616842.18052783aaf07d24\" value_size:461 lease:449395681120755082 >> failure:<>"}
	{"level":"info","ts":"2024-11-05T18:57:58.621273Z","caller":"traceutil/trace.go:171","msg":"trace[1531555745] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"353.902961ms","start":"2024-11-05T18:57:58.267247Z","end":"2024-11-05T18:57:58.621150Z","steps":["trace[1531555745] 'process raft request'  (duration: 20.190726ms)","trace[1531555745] 'compare'  (duration: 332.821976ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T18:57:58.621524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.536495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T18:57:58.621590Z","caller":"traceutil/trace.go:171","msg":"trace[2099495959] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:456; }","duration":"160.627309ms","start":"2024-11-05T18:57:58.460953Z","end":"2024-11-05T18:57:58.621581Z","steps":["trace[2099495959] 'agreement among raft nodes before linearized reading'  (duration: 160.475477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:57:58.622402Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:57:58.267235Z","time spent":"354.184229ms","remote":"127.0.0.1:37630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":656,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-616842.18052782d32ee607\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-616842.18052782d32ee607\" value_size:584 lease:449395681120755075 >> failure:<>"}
	{"level":"info","ts":"2024-11-05T18:58:10.932600Z","caller":"traceutil/trace.go:171","msg":"trace[1508301133] transaction","detail":"{read_only:false; response_revision:518; number_of_response:1; }","duration":"129.445555ms","start":"2024-11-05T18:58:10.803118Z","end":"2024-11-05T18:58:10.932564Z","steps":["trace[1508301133] 'process raft request'  (duration: 129.323697ms)"],"step_count":1}
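	After winning the term-3 election the restarted etcd member serves clients again, but it logs a few "apply request took too long" warnings in the 130-350ms range, which points at slow I/O on the test host rather than a failure. A hedged way to filter just those warnings out of the container, using crictl and the container ID shown in this run:
	
	    minikube -p pause-616842 ssh "sudo crictl logs d16f614ae991b 2>&1 | grep 'took too long'"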
	
	
	==> kernel <==
	 18:58:19 up 2 min,  0 users,  load average: 1.06, 0.56, 0.22
	Linux pause-616842 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9] <==
	I1105 18:57:32.288428       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:57:32.961600       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1105 18:57:32.981921       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1105 18:57:32.986526       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1105 18:57:32.987246       1 instance.go:232] Using reconciler: lease
	I1105 18:57:32.986447       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W1105 18:57:33.082022       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45592->127.0.0.1:2379: read: connection reset by peer"
	W1105 18:57:33.082196       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45576->127.0.0.1:2379: read: connection reset by peer"
	W1105 18:57:33.082343       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45582->127.0.0.1:2379: read: connection reset by peer"
	W1105 18:57:34.082512       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:34.082910       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:34.083074       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:35.591080       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:35.677411       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:35.893137       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:37.986550       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:38.177507       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:38.674184       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:41.985521       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:42.683702       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:42.992240       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:47.869642       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:48.066333       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 18:57:49.268097       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1105 18:57:52.989430       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [a5a7324a56a6218103b698d2e2e29bccbb76402e3879bb2c16892df84bc05629] <==
	I1105 18:57:58.079096       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:57:58.079201       1 policy_source.go:224] refreshing policies
	I1105 18:57:58.079955       1 shared_informer.go:320] Caches are synced for configmaps
	I1105 18:57:58.127102       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:57:58.139625       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 18:57:58.146937       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1105 18:57:58.147175       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:57:58.149359       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 18:57:58.149408       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 18:57:58.149618       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 18:57:58.158430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 18:57:58.158528       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 18:57:58.158611       1 aggregator.go:171] initial CRD sync complete...
	I1105 18:57:58.158635       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 18:57:58.158640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 18:57:58.158645       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:57:58.184978       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1105 18:57:58.945693       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1105 18:57:59.350156       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 18:57:59.362064       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 18:57:59.401876       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 18:57:59.429130       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1105 18:57:59.436057       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1105 18:58:01.619760       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:58:01.683485       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [39f18b945c536d26a7ed29fcb9f7e4e4c262a5898bcbe45dac4ad837933a5ed4] <==
	I1105 18:58:01.565271       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1105 18:58:01.568591       1 shared_informer.go:320] Caches are synced for daemon sets
	I1105 18:58:01.570709       1 shared_informer.go:320] Caches are synced for ephemeral
	I1105 18:58:01.576722       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1105 18:58:01.576871       1 shared_informer.go:320] Caches are synced for stateful set
	I1105 18:58:01.584905       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1105 18:58:01.585007       1 shared_informer.go:320] Caches are synced for GC
	I1105 18:58:01.585064       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1105 18:58:01.585107       1 shared_informer.go:320] Caches are synced for job
	I1105 18:58:01.596034       1 shared_informer.go:320] Caches are synced for resource quota
	I1105 18:58:01.610619       1 shared_informer.go:320] Caches are synced for endpoint
	I1105 18:58:01.612453       1 shared_informer.go:320] Caches are synced for deployment
	I1105 18:58:01.618615       1 shared_informer.go:320] Caches are synced for disruption
	I1105 18:58:01.627308       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1105 18:58:01.627723       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1105 18:58:01.627899       1 shared_informer.go:320] Caches are synced for HPA
	I1105 18:58:01.627892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.034µs"
	I1105 18:58:01.629192       1 shared_informer.go:320] Caches are synced for persistent volume
	I1105 18:58:01.629468       1 shared_informer.go:320] Caches are synced for attach detach
	I1105 18:58:02.010200       1 shared_informer.go:320] Caches are synced for garbage collector
	I1105 18:58:02.076912       1 shared_informer.go:320] Caches are synced for garbage collector
	I1105 18:58:02.076954       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1105 18:58:02.395575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.69058ms"
	I1105 18:58:02.413015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.347915ms"
	I1105 18:58:02.414617       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.069µs"
	
	
	==> kube-controller-manager [d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283] <==
	I1105 18:57:33.269128       1 serving.go:386] Generated self-signed cert in-memory
	I1105 18:57:33.473481       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1105 18:57:33.473530       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:57:33.475452       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1105 18:57:33.476103       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1105 18:57:33.476204       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1105 18:57:33.476257       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [26a30cd22fcdaf1aca3825e4dad9294024c6773161c1f393070369e8d58bbee9] <==
	
	
	==> kube-proxy [f196c53aa24e7a6a690cddbb44143580ee74e9ef1e8db61b369bff37ffdc176d] <==
	 >
	E1105 18:57:45.107501       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:57:53.996376       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-616842\": dial tcp 192.168.39.64:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.64:38128->192.168.39.64:8443: read: connection reset by peer"
	E1105 18:57:55.173062       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-616842\": dial tcp 192.168.39.64:8443: connect: connection refused"
	I1105 18:57:58.142622       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.64"]
	E1105 18:57:58.142794       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:57:58.249201       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:57:58.249240       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:57:58.249296       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:57:58.252471       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:57:58.252773       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:57:58.252814       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:57:58.256420       1 config.go:199] "Starting service config controller"
	I1105 18:57:58.256478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:57:58.256533       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:57:58.256558       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:57:58.257826       1 config.go:328] "Starting node config controller"
	I1105 18:57:58.257902       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:57:58.357670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:57:58.357731       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:57:58.357993       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d] <==
	I1105 18:57:33.742154       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [bde83f98d1dc5f3b18e75f0b0368342736ec1e173046eda71a1fae45ba9c2122] <==
	I1105 18:57:56.184073       1 serving.go:386] Generated self-signed cert in-memory
	W1105 18:57:58.046965       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1105 18:57:58.047094       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1105 18:57:58.047124       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1105 18:57:58.047194       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1105 18:57:58.134795       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1105 18:57:58.137248       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:57:58.145182       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1105 18:57:58.147463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1105 18:57:58.153166       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 18:57:58.147483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1105 18:57:58.253949       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.940417    3354 kubelet_node_status.go:72] "Attempting to register node" node="pause-616842"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: E1105 18:57:54.941966    3354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.64:8443: connect: connection refused" node="pause-616842"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.982455    3354 scope.go:117] "RemoveContainer" containerID="13c0a0d364adb0dfdafa24fb910e40c913bab1939598800ba5f808c441477257"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.984465    3354 scope.go:117] "RemoveContainer" containerID="0731f779ee1aea3b69c953b20ad40d8e800dfb15ea4eee4dc6890a43c8264cc9"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.993725    3354 scope.go:117] "RemoveContainer" containerID="7d7ecae7e52fda85fe24e597f1a11a0f0b0dbbcc0bc6bcc3bc170f710b999e5d"
	Nov 05 18:57:54 pause-616842 kubelet[3354]: I1105 18:57:54.993957    3354 scope.go:117] "RemoveContainer" containerID="d4352539586e6d1b759468a3fd6a9105cc66afcd721c35299b99a9a8cf9e3283"
	Nov 05 18:57:55 pause-616842 kubelet[3354]: E1105 18:57:55.144204    3354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-616842?timeout=10s\": dial tcp 192.168.39.64:8443: connect: connection refused" interval="800ms"
	Nov 05 18:57:55 pause-616842 kubelet[3354]: W1105 18:57:55.332994    3354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-616842&limit=500&resourceVersion=0": dial tcp 192.168.39.64:8443: connect: connection refused
	Nov 05 18:57:55 pause-616842 kubelet[3354]: E1105 18:57:55.333128    3354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-616842&limit=500&resourceVersion=0\": dial tcp 192.168.39.64:8443: connect: connection refused" logger="UnhandledError"
	Nov 05 18:57:55 pause-616842 kubelet[3354]: I1105 18:57:55.343791    3354 kubelet_node_status.go:72] "Attempting to register node" node="pause-616842"
	Nov 05 18:57:55 pause-616842 kubelet[3354]: E1105 18:57:55.344699    3354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.64:8443: connect: connection refused" node="pause-616842"
	Nov 05 18:57:56 pause-616842 kubelet[3354]: I1105 18:57:56.146339    3354 kubelet_node_status.go:72] "Attempting to register node" node="pause-616842"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.193703    3354 kubelet_node_status.go:111] "Node was previously registered" node="pause-616842"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.194420    3354 kubelet_node_status.go:75] "Successfully registered node" node="pause-616842"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.194597    3354 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.196462    3354 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.519810    3354 apiserver.go:52] "Watching apiserver"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.538924    3354 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.621785    3354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87a4c6d0-e674-4674-9c7b-0f859104617f-lib-modules\") pod \"kube-proxy-mgld6\" (UID: \"87a4c6d0-e674-4674-9c7b-0f859104617f\") " pod="kube-system/kube-proxy-mgld6"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: I1105 18:57:58.621867    3354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87a4c6d0-e674-4674-9c7b-0f859104617f-xtables-lock\") pod \"kube-proxy-mgld6\" (UID: \"87a4c6d0-e674-4674-9c7b-0f859104617f\") " pod="kube-system/kube-proxy-mgld6"
	Nov 05 18:57:58 pause-616842 kubelet[3354]: E1105 18:57:58.648345    3354 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-616842\" already exists" pod="kube-system/kube-controller-manager-pause-616842"
	Nov 05 18:58:04 pause-616842 kubelet[3354]: E1105 18:58:04.640676    3354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833084640392496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:58:04 pause-616842 kubelet[3354]: E1105 18:58:04.640717    3354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833084640392496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:58:14 pause-616842 kubelet[3354]: E1105 18:58:14.643411    3354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833094642457105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:58:14 pause-616842 kubelet[3354]: E1105 18:58:14.643452    3354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730833094642457105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-616842 -n pause-616842
helpers_test.go:261: (dbg) Run:  kubectl --context pause-616842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (90.88s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (316.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-567666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-567666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m16.523054343s)

                                                
                                                
-- stdout --
	* [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 19:00:34.870057   66674 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:00:34.870161   66674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:00:34.870167   66674 out.go:358] Setting ErrFile to fd 2...
	I1105 19:00:34.870172   66674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:00:34.870341   66674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:00:34.870909   66674 out.go:352] Setting JSON to false
	I1105 19:00:34.872005   66674 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6177,"bootTime":1730827058,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:00:34.872103   66674 start.go:139] virtualization: kvm guest
	I1105 19:00:34.874504   66674 out.go:177] * [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:00:34.875888   66674 notify.go:220] Checking for updates...
	I1105 19:00:34.875905   66674 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:00:34.877239   66674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:00:34.878614   66674 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:00:34.879984   66674 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:00:34.881337   66674 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:00:34.882680   66674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:00:34.884693   66674 config.go:182] Loaded profile config "bridge-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:00:34.884795   66674 config.go:182] Loaded profile config "enable-default-cni-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:00:34.884902   66674 config.go:182] Loaded profile config "flannel-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:00:34.885016   66674 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:00:34.922494   66674 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 19:00:34.923774   66674 start.go:297] selected driver: kvm2
	I1105 19:00:34.923793   66674 start.go:901] validating driver "kvm2" against <nil>
	I1105 19:00:34.923809   66674 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:00:34.924527   66674 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:00:34.924619   66674 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:00:34.940799   66674 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:00:34.940853   66674 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 19:00:34.941128   66674 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:00:34.941164   66674 cni.go:84] Creating CNI manager for ""
	I1105 19:00:34.941217   66674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:00:34.941228   66674 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 19:00:34.941290   66674 start.go:340] cluster config:
	{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:00:34.941403   66674 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:00:34.943237   66674 out.go:177] * Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	I1105 19:00:34.944426   66674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:00:34.944470   66674 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 19:00:34.944479   66674 cache.go:56] Caching tarball of preloaded images
	I1105 19:00:34.944550   66674 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:00:34.944563   66674 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 19:00:34.944643   66674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:00:34.944660   66674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json: {Name:mk16a6dc95751e91c4c614263c33953c37ee4a30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:00:34.944809   66674 start.go:360] acquireMachinesLock for old-k8s-version-567666: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:01:18.416021   66674 start.go:364] duration metric: took 43.471174808s to acquireMachinesLock for "old-k8s-version-567666"
	I1105 19:01:18.416117   66674 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:01:18.416215   66674 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 19:01:18.417721   66674 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 19:01:18.417899   66674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:01:18.417945   66674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:01:18.437081   66674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I1105 19:01:18.437563   66674 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:01:18.449265   66674 main.go:141] libmachine: Using API Version  1
	I1105 19:01:18.449295   66674 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:01:18.449674   66674 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:01:18.449865   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:01:18.450000   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:01:18.450154   66674 start.go:159] libmachine.API.Create for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:01:18.450179   66674 client.go:168] LocalClient.Create starting
	I1105 19:01:18.450215   66674 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 19:01:18.450255   66674 main.go:141] libmachine: Decoding PEM data...
	I1105 19:01:18.450276   66674 main.go:141] libmachine: Parsing certificate...
	I1105 19:01:18.450347   66674 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 19:01:18.450381   66674 main.go:141] libmachine: Decoding PEM data...
	I1105 19:01:18.450397   66674 main.go:141] libmachine: Parsing certificate...
	I1105 19:01:18.450420   66674 main.go:141] libmachine: Running pre-create checks...
	I1105 19:01:18.450433   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .PreCreateCheck
	I1105 19:01:18.450815   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:01:18.451325   66674 main.go:141] libmachine: Creating machine...
	I1105 19:01:18.451529   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .Create
	I1105 19:01:18.451706   66674 main.go:141] libmachine: (old-k8s-version-567666) Creating KVM machine...
	I1105 19:01:18.452937   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found existing default KVM network
	I1105 19:01:18.453970   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:18.453810   68436 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:5b:c0} reservation:<nil>}
	I1105 19:01:18.454947   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:18.454860   68436 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b0:f0:f9} reservation:<nil>}
	I1105 19:01:18.456074   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:18.455976   68436 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c4250}
	I1105 19:01:18.456101   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | created network xml: 
	I1105 19:01:18.456114   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | <network>
	I1105 19:01:18.456126   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |   <name>mk-old-k8s-version-567666</name>
	I1105 19:01:18.456140   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |   <dns enable='no'/>
	I1105 19:01:18.456150   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |   
	I1105 19:01:18.456160   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1105 19:01:18.456168   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |     <dhcp>
	I1105 19:01:18.456500   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1105 19:01:18.456530   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |     </dhcp>
	I1105 19:01:18.456538   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |   </ip>
	I1105 19:01:18.456546   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG |   
	I1105 19:01:18.456560   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | </network>
	I1105 19:01:18.456576   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | 
	I1105 19:01:18.462752   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | trying to create private KVM network mk-old-k8s-version-567666 192.168.61.0/24...
	I1105 19:01:18.551187   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | private KVM network mk-old-k8s-version-567666 192.168.61.0/24 created
	I1105 19:01:18.551220   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:18.551170   68436 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:01:18.551272   66674 main.go:141] libmachine: (old-k8s-version-567666) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666 ...
	I1105 19:01:18.551316   66674 main.go:141] libmachine: (old-k8s-version-567666) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 19:01:18.551354   66674 main.go:141] libmachine: (old-k8s-version-567666) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 19:01:18.830806   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:18.830640   68436 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa...
	I1105 19:01:19.176371   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:19.176244   68436 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/old-k8s-version-567666.rawdisk...
	I1105 19:01:19.176393   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Writing magic tar header
	I1105 19:01:19.176415   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Writing SSH key tar header
	I1105 19:01:19.176481   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:19.176436   68436 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666 ...
	I1105 19:01:19.176602   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666
	I1105 19:01:19.176656   66674 main.go:141] libmachine: (old-k8s-version-567666) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666 (perms=drwx------)
	I1105 19:01:19.176669   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 19:01:19.176684   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:01:19.176697   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 19:01:19.176714   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 19:01:19.176722   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Checking permissions on dir: /home/jenkins
	I1105 19:01:19.176733   66674 main.go:141] libmachine: (old-k8s-version-567666) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 19:01:19.176741   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Checking permissions on dir: /home
	I1105 19:01:19.176751   66674 main.go:141] libmachine: (old-k8s-version-567666) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 19:01:19.176763   66674 main.go:141] libmachine: (old-k8s-version-567666) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 19:01:19.176773   66674 main.go:141] libmachine: (old-k8s-version-567666) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 19:01:19.176783   66674 main.go:141] libmachine: (old-k8s-version-567666) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 19:01:19.176790   66674 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:01:19.176798   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Skipping /home - not owner
	I1105 19:01:19.177997   66674 main.go:141] libmachine: (old-k8s-version-567666) define libvirt domain using xml: 
	I1105 19:01:19.178027   66674 main.go:141] libmachine: (old-k8s-version-567666) <domain type='kvm'>
	I1105 19:01:19.178038   66674 main.go:141] libmachine: (old-k8s-version-567666)   <name>old-k8s-version-567666</name>
	I1105 19:01:19.178048   66674 main.go:141] libmachine: (old-k8s-version-567666)   <memory unit='MiB'>2200</memory>
	I1105 19:01:19.178057   66674 main.go:141] libmachine: (old-k8s-version-567666)   <vcpu>2</vcpu>
	I1105 19:01:19.178070   66674 main.go:141] libmachine: (old-k8s-version-567666)   <features>
	I1105 19:01:19.178085   66674 main.go:141] libmachine: (old-k8s-version-567666)     <acpi/>
	I1105 19:01:19.178093   66674 main.go:141] libmachine: (old-k8s-version-567666)     <apic/>
	I1105 19:01:19.178106   66674 main.go:141] libmachine: (old-k8s-version-567666)     <pae/>
	I1105 19:01:19.178117   66674 main.go:141] libmachine: (old-k8s-version-567666)     
	I1105 19:01:19.178125   66674 main.go:141] libmachine: (old-k8s-version-567666)   </features>
	I1105 19:01:19.178136   66674 main.go:141] libmachine: (old-k8s-version-567666)   <cpu mode='host-passthrough'>
	I1105 19:01:19.178142   66674 main.go:141] libmachine: (old-k8s-version-567666)   
	I1105 19:01:19.178171   66674 main.go:141] libmachine: (old-k8s-version-567666)   </cpu>
	I1105 19:01:19.178189   66674 main.go:141] libmachine: (old-k8s-version-567666)   <os>
	I1105 19:01:19.178201   66674 main.go:141] libmachine: (old-k8s-version-567666)     <type>hvm</type>
	I1105 19:01:19.178212   66674 main.go:141] libmachine: (old-k8s-version-567666)     <boot dev='cdrom'/>
	I1105 19:01:19.178222   66674 main.go:141] libmachine: (old-k8s-version-567666)     <boot dev='hd'/>
	I1105 19:01:19.178240   66674 main.go:141] libmachine: (old-k8s-version-567666)     <bootmenu enable='no'/>
	I1105 19:01:19.178251   66674 main.go:141] libmachine: (old-k8s-version-567666)   </os>
	I1105 19:01:19.178260   66674 main.go:141] libmachine: (old-k8s-version-567666)   <devices>
	I1105 19:01:19.178267   66674 main.go:141] libmachine: (old-k8s-version-567666)     <disk type='file' device='cdrom'>
	I1105 19:01:19.178280   66674 main.go:141] libmachine: (old-k8s-version-567666)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/boot2docker.iso'/>
	I1105 19:01:19.178287   66674 main.go:141] libmachine: (old-k8s-version-567666)       <target dev='hdc' bus='scsi'/>
	I1105 19:01:19.178298   66674 main.go:141] libmachine: (old-k8s-version-567666)       <readonly/>
	I1105 19:01:19.178304   66674 main.go:141] libmachine: (old-k8s-version-567666)     </disk>
	I1105 19:01:19.178311   66674 main.go:141] libmachine: (old-k8s-version-567666)     <disk type='file' device='disk'>
	I1105 19:01:19.178317   66674 main.go:141] libmachine: (old-k8s-version-567666)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 19:01:19.178325   66674 main.go:141] libmachine: (old-k8s-version-567666)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/old-k8s-version-567666.rawdisk'/>
	I1105 19:01:19.178330   66674 main.go:141] libmachine: (old-k8s-version-567666)       <target dev='hda' bus='virtio'/>
	I1105 19:01:19.178335   66674 main.go:141] libmachine: (old-k8s-version-567666)     </disk>
	I1105 19:01:19.178339   66674 main.go:141] libmachine: (old-k8s-version-567666)     <interface type='network'>
	I1105 19:01:19.178345   66674 main.go:141] libmachine: (old-k8s-version-567666)       <source network='mk-old-k8s-version-567666'/>
	I1105 19:01:19.178354   66674 main.go:141] libmachine: (old-k8s-version-567666)       <model type='virtio'/>
	I1105 19:01:19.178363   66674 main.go:141] libmachine: (old-k8s-version-567666)     </interface>
	I1105 19:01:19.178368   66674 main.go:141] libmachine: (old-k8s-version-567666)     <interface type='network'>
	I1105 19:01:19.178373   66674 main.go:141] libmachine: (old-k8s-version-567666)       <source network='default'/>
	I1105 19:01:19.178377   66674 main.go:141] libmachine: (old-k8s-version-567666)       <model type='virtio'/>
	I1105 19:01:19.178382   66674 main.go:141] libmachine: (old-k8s-version-567666)     </interface>
	I1105 19:01:19.178386   66674 main.go:141] libmachine: (old-k8s-version-567666)     <serial type='pty'>
	I1105 19:01:19.178390   66674 main.go:141] libmachine: (old-k8s-version-567666)       <target port='0'/>
	I1105 19:01:19.178394   66674 main.go:141] libmachine: (old-k8s-version-567666)     </serial>
	I1105 19:01:19.178399   66674 main.go:141] libmachine: (old-k8s-version-567666)     <console type='pty'>
	I1105 19:01:19.178404   66674 main.go:141] libmachine: (old-k8s-version-567666)       <target type='serial' port='0'/>
	I1105 19:01:19.178411   66674 main.go:141] libmachine: (old-k8s-version-567666)     </console>
	I1105 19:01:19.178417   66674 main.go:141] libmachine: (old-k8s-version-567666)     <rng model='virtio'>
	I1105 19:01:19.178425   66674 main.go:141] libmachine: (old-k8s-version-567666)       <backend model='random'>/dev/random</backend>
	I1105 19:01:19.178432   66674 main.go:141] libmachine: (old-k8s-version-567666)     </rng>
	I1105 19:01:19.178440   66674 main.go:141] libmachine: (old-k8s-version-567666)     
	I1105 19:01:19.178446   66674 main.go:141] libmachine: (old-k8s-version-567666)     
	I1105 19:01:19.178453   66674 main.go:141] libmachine: (old-k8s-version-567666)   </devices>
	I1105 19:01:19.178459   66674 main.go:141] libmachine: (old-k8s-version-567666) </domain>
	I1105 19:01:19.178468   66674 main.go:141] libmachine: (old-k8s-version-567666) 
	I1105 19:01:19.183619   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:73:f7:ac in network default
	I1105 19:01:19.184311   66674 main.go:141] libmachine: (old-k8s-version-567666) Ensuring networks are active...
	I1105 19:01:19.184330   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:19.185133   66674 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network default is active
	I1105 19:01:19.185490   66674 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network mk-old-k8s-version-567666 is active
	I1105 19:01:19.186100   66674 main.go:141] libmachine: (old-k8s-version-567666) Getting domain xml...
	I1105 19:01:19.186938   66674 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:01:20.844473   66674 main.go:141] libmachine: (old-k8s-version-567666) Waiting to get IP...
	I1105 19:01:20.845407   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:20.845955   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:20.845984   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:20.845928   68436 retry.go:31] will retry after 229.361758ms: waiting for machine to come up
	I1105 19:01:21.077214   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:21.077645   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:21.077682   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:21.077610   68436 retry.go:31] will retry after 261.840331ms: waiting for machine to come up
	I1105 19:01:21.341496   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:21.342350   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:21.342376   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:21.342298   68436 retry.go:31] will retry after 377.083629ms: waiting for machine to come up
	I1105 19:01:21.720458   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:21.720998   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:21.721028   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:21.720969   68436 retry.go:31] will retry after 513.534174ms: waiting for machine to come up
	I1105 19:01:22.235907   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:22.236512   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:22.236540   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:22.236464   68436 retry.go:31] will retry after 615.933723ms: waiting for machine to come up
	I1105 19:01:22.854083   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:22.854633   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:22.854661   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:22.854571   68436 retry.go:31] will retry after 738.193537ms: waiting for machine to come up
	I1105 19:01:23.593945   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:23.594424   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:23.594447   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:23.594398   68436 retry.go:31] will retry after 1.175509424s: waiting for machine to come up
	I1105 19:01:24.770901   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:24.771509   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:24.771538   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:24.771456   68436 retry.go:31] will retry after 955.770387ms: waiting for machine to come up
	I1105 19:01:25.728472   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:25.728855   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:25.728884   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:25.728810   68436 retry.go:31] will retry after 1.625781255s: waiting for machine to come up
	I1105 19:01:27.356256   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:27.356789   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:27.356815   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:27.356745   68436 retry.go:31] will retry after 1.48504769s: waiting for machine to come up
	I1105 19:01:28.844022   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:28.844609   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:28.844641   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:28.844552   68436 retry.go:31] will retry after 2.132713108s: waiting for machine to come up
	I1105 19:01:30.978854   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:30.979578   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:30.979602   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:30.979531   68436 retry.go:31] will retry after 3.150091217s: waiting for machine to come up
	I1105 19:01:34.131679   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:34.132286   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:34.132307   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:34.132247   68436 retry.go:31] will retry after 2.999646139s: waiting for machine to come up
	I1105 19:01:37.135339   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:37.135813   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:01:37.135835   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:01:37.135768   68436 retry.go:31] will retry after 3.63515611s: waiting for machine to come up
	I1105 19:01:40.771945   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:40.772553   66674 main.go:141] libmachine: (old-k8s-version-567666) Found IP for machine: 192.168.61.125
	I1105 19:01:40.772584   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has current primary IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:40.772593   66674 main.go:141] libmachine: (old-k8s-version-567666) Reserving static IP address...
	I1105 19:01:40.772973   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"} in network mk-old-k8s-version-567666
	I1105 19:01:40.850899   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:01:40.850932   66674 main.go:141] libmachine: (old-k8s-version-567666) Reserved static IP address: 192.168.61.125
	I1105 19:01:40.850941   66674 main.go:141] libmachine: (old-k8s-version-567666) Waiting for SSH to be available...
	I1105 19:01:40.854227   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:40.854578   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666
	I1105 19:01:40.854605   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find defined IP address of network mk-old-k8s-version-567666 interface with MAC address 52:54:00:19:75:85
	I1105 19:01:40.854741   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:01:40.854769   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:01:40.854816   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:01:40.854830   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:01:40.854856   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:01:40.858825   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: exit status 255: 
	I1105 19:01:40.858849   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1105 19:01:40.858859   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | command : exit 0
	I1105 19:01:40.858865   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | err     : exit status 255
	I1105 19:01:40.858921   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | output  : 
	I1105 19:01:43.859843   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:01:43.862508   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:43.863067   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:43.863097   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:43.863244   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:01:43.863280   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:01:43.863349   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:01:43.863383   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:01:43.863404   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:01:43.986770   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: <nil>: 
	I1105 19:01:43.987066   66674 main.go:141] libmachine: (old-k8s-version-567666) KVM machine creation complete!
	I1105 19:01:43.987376   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:01:43.987887   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:01:43.988050   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:01:43.988160   66674 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 19:01:43.988171   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetState
	I1105 19:01:43.989458   66674 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 19:01:43.989478   66674 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 19:01:43.989484   66674 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 19:01:43.989492   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:43.991811   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:43.992163   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:43.992191   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:43.992324   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:43.992497   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:43.992706   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:43.992814   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:43.992979   66674 main.go:141] libmachine: Using SSH client type: native
	I1105 19:01:43.993161   66674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:01:43.993172   66674 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 19:01:44.094154   66674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:01:44.094175   66674 main.go:141] libmachine: Detecting the provisioner...
	I1105 19:01:44.094183   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:44.096811   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.097220   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:44.097249   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.097463   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:44.097658   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.097811   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.097930   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:44.098096   66674 main.go:141] libmachine: Using SSH client type: native
	I1105 19:01:44.098309   66674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:01:44.098323   66674 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 19:01:44.199386   66674 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 19:01:44.199455   66674 main.go:141] libmachine: found compatible host: buildroot
	I1105 19:01:44.199464   66674 main.go:141] libmachine: Provisioning with buildroot...
	I1105 19:01:44.199471   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:01:44.199719   66674 buildroot.go:166] provisioning hostname "old-k8s-version-567666"
	I1105 19:01:44.199744   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:01:44.199897   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:44.202499   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.202787   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:44.202808   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.202995   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:44.203170   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.203325   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.203446   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:44.203557   66674 main.go:141] libmachine: Using SSH client type: native
	I1105 19:01:44.203764   66674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:01:44.203782   66674 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-567666 && echo "old-k8s-version-567666" | sudo tee /etc/hostname
	I1105 19:01:44.315727   66674 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-567666
	
	I1105 19:01:44.315749   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:44.318642   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.319014   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:44.319050   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.319294   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:44.319492   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.319670   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.319811   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:44.319936   66674 main.go:141] libmachine: Using SSH client type: native
	I1105 19:01:44.320132   66674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:01:44.320148   66674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-567666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-567666/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-567666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:01:44.427806   66674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:01:44.427832   66674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:01:44.427856   66674 buildroot.go:174] setting up certificates
	I1105 19:01:44.427868   66674 provision.go:84] configureAuth start
	I1105 19:01:44.427881   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:01:44.428151   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:01:44.430833   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.431237   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:44.431267   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.431403   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:44.433564   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.433889   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:44.433908   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.434065   66674 provision.go:143] copyHostCerts
	I1105 19:01:44.434119   66674 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:01:44.434140   66674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:01:44.434228   66674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:01:44.434360   66674 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:01:44.434372   66674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:01:44.434399   66674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:01:44.434488   66674 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:01:44.434499   66674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:01:44.434533   66674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:01:44.434632   66674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-567666 san=[127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666]
	I1105 19:01:44.686042   66674 provision.go:177] copyRemoteCerts
	I1105 19:01:44.686091   66674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:01:44.686112   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:44.688817   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.689121   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:44.689154   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.689307   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:44.689483   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.689610   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:44.689738   66674 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:01:44.768799   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:01:44.791467   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 19:01:44.813747   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 19:01:44.837945   66674 provision.go:87] duration metric: took 410.061653ms to configureAuth
	I1105 19:01:44.837979   66674 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:01:44.838160   66674 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:01:44.838258   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:44.840982   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.841267   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:44.841300   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:44.841495   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:44.841694   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.841833   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:44.841963   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:44.842128   66674 main.go:141] libmachine: Using SSH client type: native
	I1105 19:01:44.842301   66674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:01:44.842322   66674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:01:45.063581   66674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:01:45.063627   66674 main.go:141] libmachine: Checking connection to Docker...
	I1105 19:01:45.063641   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetURL
	I1105 19:01:45.064861   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using libvirt version 6000000
	I1105 19:01:45.067128   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.067514   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:45.067547   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.067686   66674 main.go:141] libmachine: Docker is up and running!
	I1105 19:01:45.067704   66674 main.go:141] libmachine: Reticulating splines...
	I1105 19:01:45.067712   66674 client.go:171] duration metric: took 26.617523124s to LocalClient.Create
	I1105 19:01:45.067734   66674 start.go:167] duration metric: took 26.617581232s to libmachine.API.Create "old-k8s-version-567666"
	I1105 19:01:45.067743   66674 start.go:293] postStartSetup for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:01:45.067753   66674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:01:45.067768   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:01:45.067962   66674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:01:45.067986   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:45.070162   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.070540   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:45.070566   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.070675   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:45.070840   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:45.071013   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:45.071169   66674 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:01:45.153616   66674 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:01:45.157904   66674 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:01:45.157930   66674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:01:45.158006   66674 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:01:45.158117   66674 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:01:45.158229   66674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:01:45.167440   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:01:45.195413   66674 start.go:296] duration metric: took 127.650961ms for postStartSetup
	I1105 19:01:45.195535   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:01:45.196299   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:01:45.199106   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.199469   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:45.199500   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.199704   66674 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:01:45.199954   66674 start.go:128] duration metric: took 26.783725685s to createHost
	I1105 19:01:45.199985   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:45.202440   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.202764   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:45.202796   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.202940   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:45.203133   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:45.203308   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:45.203448   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:45.203613   66674 main.go:141] libmachine: Using SSH client type: native
	I1105 19:01:45.203799   66674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:01:45.203812   66674 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:01:45.307477   66674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833305.286891147
	
	I1105 19:01:45.307499   66674 fix.go:216] guest clock: 1730833305.286891147
	I1105 19:01:45.307508   66674 fix.go:229] Guest: 2024-11-05 19:01:45.286891147 +0000 UTC Remote: 2024-11-05 19:01:45.199970809 +0000 UTC m=+70.371529299 (delta=86.920338ms)
	I1105 19:01:45.307544   66674 fix.go:200] guest clock delta is within tolerance: 86.920338ms
	I1105 19:01:45.307551   66674 start.go:83] releasing machines lock for "old-k8s-version-567666", held for 26.891474093s
	I1105 19:01:45.307582   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:01:45.307851   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:01:45.310553   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.310887   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:45.310927   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.311114   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:01:45.311605   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:01:45.311792   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:01:45.311878   66674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:01:45.311920   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:45.311990   66674 ssh_runner.go:195] Run: cat /version.json
	I1105 19:01:45.312016   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:01:45.314808   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.315184   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.315218   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:45.315239   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.315408   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:45.315577   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:45.315672   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:45.315696   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:45.315737   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:45.315851   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:01:45.315930   66674 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:01:45.316047   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:01:45.316227   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:01:45.316387   66674 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:01:45.433956   66674 ssh_runner.go:195] Run: systemctl --version
	I1105 19:01:45.440182   66674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:01:45.600791   66674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:01:45.606218   66674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:01:45.606281   66674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:01:45.622348   66674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:01:45.622375   66674 start.go:495] detecting cgroup driver to use...
	I1105 19:01:45.622439   66674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:01:45.643500   66674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:01:45.662996   66674 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:01:45.663064   66674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:01:45.682615   66674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:01:45.699309   66674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:01:45.862496   66674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:01:46.020995   66674 docker.go:233] disabling docker service ...
	I1105 19:01:46.021065   66674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:01:46.036833   66674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:01:46.053585   66674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:01:46.210627   66674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:01:46.338361   66674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:01:46.357897   66674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:01:46.381756   66674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 19:01:46.381827   66674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:01:46.393052   66674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:01:46.393130   66674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:01:46.404191   66674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:01:46.415674   66674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:01:46.426868   66674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:01:46.439002   66674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:01:46.449021   66674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:01:46.449089   66674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:01:46.463439   66674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:01:46.474228   66674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:01:46.626437   66674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:01:46.741257   66674 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:01:46.741344   66674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:01:46.745968   66674 start.go:563] Will wait 60s for crictl version
	I1105 19:01:46.746036   66674 ssh_runner.go:195] Run: which crictl
	I1105 19:01:46.749993   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:01:46.801037   66674 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:01:46.801131   66674 ssh_runner.go:195] Run: crio --version
	I1105 19:01:46.834430   66674 ssh_runner.go:195] Run: crio --version
	I1105 19:01:46.867636   66674 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1105 19:01:46.869238   66674 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:01:46.874453   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:46.876566   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:01:33 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:01:46.876595   66674 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:01:46.876812   66674 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:01:46.881432   66674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:01:46.895736   66674 kubeadm.go:883] updating cluster {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:01:46.895849   66674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:01:46.895916   66674 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:01:46.930644   66674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:01:46.930719   66674 ssh_runner.go:195] Run: which lz4
	I1105 19:01:46.935872   66674 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:01:46.940817   66674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:01:46.940855   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 19:01:48.517841   66674 crio.go:462] duration metric: took 1.582006885s to copy over tarball
	I1105 19:01:48.517935   66674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:01:51.142013   66674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.624048995s)
	I1105 19:01:51.142043   66674 crio.go:469] duration metric: took 2.624165689s to extract the tarball
	I1105 19:01:51.142054   66674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:01:51.186036   66674 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:01:51.231589   66674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:01:51.231609   66674 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:01:51.231654   66674 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:01:51.231703   66674 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:01:51.231723   66674 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:01:51.231727   66674 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:01:51.231710   66674 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:01:51.231777   66674 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:01:51.231879   66674 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 19:01:51.231913   66674 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 19:01:51.233306   66674 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:01:51.233310   66674 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:01:51.233318   66674 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 19:01:51.233344   66674 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:01:51.233354   66674 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 19:01:51.233369   66674 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:01:51.233361   66674 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:01:51.233347   66674 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:01:51.456898   66674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:01:51.468766   66674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:01:51.470190   66674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 19:01:51.492739   66674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 19:01:51.496044   66674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 19:01:51.523093   66674 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 19:01:51.523133   66674 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:01:51.523181   66674 ssh_runner.go:195] Run: which crictl
	I1105 19:01:51.525085   66674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:01:51.538085   66674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:01:51.627498   66674 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 19:01:51.627543   66674 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:01:51.627560   66674 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 19:01:51.627588   66674 ssh_runner.go:195] Run: which crictl
	I1105 19:01:51.627600   66674 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 19:01:51.627644   66674 ssh_runner.go:195] Run: which crictl
	I1105 19:01:51.627683   66674 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 19:01:51.627709   66674 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:01:51.627760   66674 ssh_runner.go:195] Run: which crictl
	I1105 19:01:51.634315   66674 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 19:01:51.634348   66674 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 19:01:51.634383   66674 ssh_runner.go:195] Run: which crictl
	I1105 19:01:51.634390   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:01:51.649331   66674 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 19:01:51.649381   66674 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 19:01:51.649412   66674 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:01:51.649427   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:01:51.649439   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:01:51.649448   66674 ssh_runner.go:195] Run: which crictl
	I1105 19:01:51.649485   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:01:51.649533   66674 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:01:51.649589   66674 ssh_runner.go:195] Run: which crictl
	I1105 19:01:51.649660   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:01:51.745169   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:01:51.755298   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:01:51.759773   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:01:51.759799   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:01:51.759812   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:01:51.759864   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:01:51.759871   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:01:51.873450   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:01:51.896096   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:01:51.934242   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:01:51.934268   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:01:51.934351   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:01:51.934401   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:01:51.934488   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:01:51.997894   66674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 19:01:52.012462   66674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 19:01:52.078542   66674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 19:01:52.078603   66674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 19:01:52.078639   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:01:52.078703   66674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 19:01:52.079094   66674 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:01:52.124180   66674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 19:01:52.124205   66674 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 19:01:52.436798   66674 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:01:52.584920   66674 cache_images.go:92] duration metric: took 1.353295408s to LoadCachedImages
	W1105 19:01:52.585005   66674 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1105 19:01:52.585022   66674 kubeadm.go:934] updating node { 192.168.61.125 8443 v1.20.0 crio true true} ...
	I1105 19:01:52.585126   66674 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-567666 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:01:52.585208   66674 ssh_runner.go:195] Run: crio config
	I1105 19:01:52.628124   66674 cni.go:84] Creating CNI manager for ""
	I1105 19:01:52.628152   66674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:01:52.628166   66674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:01:52.628186   66674 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-567666 NodeName:old-k8s-version-567666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 19:01:52.628309   66674 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-567666"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:01:52.628383   66674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 19:01:52.638080   66674 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:01:52.638154   66674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:01:52.646895   66674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1105 19:01:52.662340   66674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:01:52.680855   66674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1105 19:01:52.698920   66674 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I1105 19:01:52.702336   66674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:01:52.714020   66674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:01:52.837722   66674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:01:52.857023   66674 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666 for IP: 192.168.61.125
	I1105 19:01:52.857047   66674 certs.go:194] generating shared ca certs ...
	I1105 19:01:52.857066   66674 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:01:52.857282   66674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:01:52.857345   66674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:01:52.857361   66674 certs.go:256] generating profile certs ...
	I1105 19:01:52.857430   66674 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key
	I1105 19:01:52.857451   66674 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.crt with IP's: []
	I1105 19:01:53.245348   66674 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.crt ...
	I1105 19:01:53.245379   66674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.crt: {Name:mke458f4392c7cd8bc544070f082582a9574846f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:01:53.245546   66674 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key ...
	I1105 19:01:53.245565   66674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key: {Name:mk53ce2cac7a180256af483cd2fd36e6f2b06e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:01:53.245642   66674 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8
	I1105 19:01:53.245657   66674 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt.535024f8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.125]
	I1105 19:01:53.521235   66674 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt.535024f8 ...
	I1105 19:01:53.521267   66674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt.535024f8: {Name:mkc25ad6f0caeb6f377b32271c31c3e7fed94171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:01:53.521431   66674 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8 ...
	I1105 19:01:53.521443   66674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8: {Name:mk70e984cb2f33fd56492cd933d6b2aba81d026c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:01:53.521511   66674 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt.535024f8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt
	I1105 19:01:53.521578   66674 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key
	I1105 19:01:53.521629   66674 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key
	I1105 19:01:53.521649   66674 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt with IP's: []
	I1105 19:01:53.688019   66674 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt ...
	I1105 19:01:53.688047   66674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt: {Name:mkdb9c855131cf2849fba7f381e85efc4937e594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:01:53.688208   66674 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key ...
	I1105 19:01:53.688219   66674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key: {Name:mk778f982413baefd69594c842c94c1d2e3e851c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:01:53.688378   66674 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:01:53.688415   66674 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:01:53.688423   66674 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:01:53.688452   66674 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:01:53.688485   66674 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:01:53.688508   66674 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:01:53.688548   66674 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:01:53.689165   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:01:53.724927   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:01:53.755674   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:01:53.787555   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:01:53.816679   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 19:01:53.841655   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:01:53.865707   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:01:53.890238   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:01:53.913576   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:01:53.936100   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:01:53.963210   66674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:01:53.987767   66674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:01:54.006120   66674 ssh_runner.go:195] Run: openssl version
	I1105 19:01:54.011980   66674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:01:54.023783   66674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:01:54.028458   66674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:01:54.028524   66674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:01:54.035943   66674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:01:54.048099   66674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:01:54.058673   66674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:01:54.062822   66674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:01:54.062865   66674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:01:54.068086   66674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:01:54.079457   66674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:01:54.089848   66674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:01:54.094037   66674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:01:54.094093   66674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:01:54.101527   66674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:01:54.114494   66674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:01:54.118414   66674 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 19:01:54.118466   66674 kubeadm.go:392] StartCluster: {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:01:54.118553   66674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:01:54.118605   66674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:01:54.168949   66674 cri.go:89] found id: ""
	I1105 19:01:54.169028   66674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:01:54.178829   66674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:01:54.188881   66674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:01:54.197948   66674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:01:54.197971   66674 kubeadm.go:157] found existing configuration files:
	
	I1105 19:01:54.198035   66674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:01:54.208074   66674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:01:54.208141   66674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:01:54.217368   66674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:01:54.226121   66674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:01:54.226197   66674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:01:54.236084   66674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:01:54.245055   66674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:01:54.245119   66674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:01:54.254329   66674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:01:54.266092   66674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:01:54.266210   66674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:01:54.279349   66674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:01:54.395355   66674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:01:54.395429   66674 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:01:54.535584   66674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:01:54.535768   66674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:01:54.535914   66674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:01:54.766853   66674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:01:54.963882   66674 out.go:235]   - Generating certificates and keys ...
	I1105 19:01:54.964021   66674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:01:54.964106   66674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:01:54.964246   66674 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 19:01:55.149828   66674 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 19:01:55.421247   66674 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 19:01:55.828051   66674 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 19:01:55.952915   66674 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 19:01:55.953345   66674 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-567666] and IPs [192.168.61.125 127.0.0.1 ::1]
	I1105 19:01:56.123142   66674 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 19:01:56.123338   66674 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-567666] and IPs [192.168.61.125 127.0.0.1 ::1]
	I1105 19:01:56.251467   66674 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 19:01:56.365124   66674 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 19:01:56.916503   66674 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 19:01:56.916832   66674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:01:57.088472   66674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:01:57.413370   66674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:01:57.718380   66674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:01:57.821374   66674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:01:57.845356   66674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:01:57.846854   66674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:01:57.846932   66674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:01:57.999062   66674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:01:58.000983   66674 out.go:235]   - Booting up control plane ...
	I1105 19:01:58.001147   66674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:01:58.013126   66674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:01:58.014787   66674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:01:58.015907   66674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:01:58.021912   66674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:02:38.017914   66674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:02:38.018048   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:02:38.018308   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:02:43.018347   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:02:43.018609   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:02:53.017935   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:02:53.018210   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:03:13.017745   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:03:13.018004   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:03:53.019425   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:03:53.019716   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:03:53.019737   66674 kubeadm.go:310] 
	I1105 19:03:53.019800   66674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:03:53.019850   66674 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:03:53.019878   66674 kubeadm.go:310] 
	I1105 19:03:53.019924   66674 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:03:53.019974   66674 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:03:53.020115   66674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:03:53.020135   66674 kubeadm.go:310] 
	I1105 19:03:53.020277   66674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:03:53.020335   66674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:03:53.020376   66674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:03:53.020386   66674 kubeadm.go:310] 
	I1105 19:03:53.020545   66674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:03:53.020670   66674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:03:53.020686   66674 kubeadm.go:310] 
	I1105 19:03:53.020837   66674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:03:53.020953   66674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:03:53.021040   66674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:03:53.021121   66674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:03:53.021158   66674 kubeadm.go:310] 
	I1105 19:03:53.021308   66674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:03:53.021403   66674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:03:53.021503   66674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1105 19:03:53.021579   66674 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-567666] and IPs [192.168.61.125 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-567666] and IPs [192.168.61.125 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-567666] and IPs [192.168.61.125 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-567666] and IPs [192.168.61.125 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1105 19:03:53.021633   66674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:03:54.298057   66674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.27639275s)
	I1105 19:03:54.298134   66674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:03:54.311680   66674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:03:54.321949   66674 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:03:54.321969   66674 kubeadm.go:157] found existing configuration files:
	
	I1105 19:03:54.322010   66674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:03:54.330779   66674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:03:54.330832   66674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:03:54.341426   66674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:03:54.351530   66674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:03:54.351580   66674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:03:54.360489   66674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:03:54.369131   66674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:03:54.369184   66674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:03:54.378247   66674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:03:54.386572   66674 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:03:54.386618   66674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:03:54.395712   66674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:03:54.588308   66674 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:05:50.741584   66674 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:05:50.741714   66674 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 19:05:50.743264   66674 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:05:50.743326   66674 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:05:50.743425   66674 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:05:50.743507   66674 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:05:50.743593   66674 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:05:50.743660   66674 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:05:50.745456   66674 out.go:235]   - Generating certificates and keys ...
	I1105 19:05:50.745536   66674 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:05:50.745631   66674 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:05:50.745749   66674 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:05:50.745833   66674 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:05:50.745910   66674 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:05:50.745956   66674 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:05:50.746009   66674 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:05:50.746103   66674 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:05:50.746219   66674 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:05:50.746298   66674 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:05:50.746337   66674 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:05:50.746401   66674 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:05:50.746488   66674 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:05:50.746540   66674 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:05:50.746600   66674 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:05:50.746709   66674 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:05:50.746883   66674 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:05:50.747017   66674 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:05:50.747073   66674 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:05:50.747155   66674 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:05:50.748505   66674 out.go:235]   - Booting up control plane ...
	I1105 19:05:50.748581   66674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:05:50.748675   66674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:05:50.748760   66674 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:05:50.748829   66674 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:05:50.748976   66674 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:05:50.749026   66674 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:05:50.749086   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:05:50.749244   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:05:50.749305   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:05:50.749462   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:05:50.749522   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:05:50.749693   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:05:50.749757   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:05:50.749938   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:05:50.750011   66674 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:05:50.750179   66674 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:05:50.750195   66674 kubeadm.go:310] 
	I1105 19:05:50.750243   66674 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:05:50.750282   66674 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:05:50.750288   66674 kubeadm.go:310] 
	I1105 19:05:50.750317   66674 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:05:50.750355   66674 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:05:50.750478   66674 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:05:50.750485   66674 kubeadm.go:310] 
	I1105 19:05:50.750626   66674 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:05:50.750672   66674 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:05:50.750725   66674 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:05:50.750731   66674 kubeadm.go:310] 
	I1105 19:05:50.750853   66674 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:05:50.750955   66674 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:05:50.750984   66674 kubeadm.go:310] 
	I1105 19:05:50.751132   66674 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:05:50.751249   66674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:05:50.751323   66674 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:05:50.751393   66674 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:05:50.751449   66674 kubeadm.go:394] duration metric: took 3m56.63298584s to StartCluster
	I1105 19:05:50.751462   66674 kubeadm.go:310] 
	I1105 19:05:50.751484   66674 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:05:50.751529   66674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:05:50.792126   66674 cri.go:89] found id: ""
	I1105 19:05:50.792159   66674 logs.go:282] 0 containers: []
	W1105 19:05:50.792168   66674 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:05:50.792173   66674 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:05:50.792236   66674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:05:50.824109   66674 cri.go:89] found id: ""
	I1105 19:05:50.824136   66674 logs.go:282] 0 containers: []
	W1105 19:05:50.824144   66674 logs.go:284] No container was found matching "etcd"
	I1105 19:05:50.824149   66674 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:05:50.824197   66674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:05:50.861847   66674 cri.go:89] found id: ""
	I1105 19:05:50.861876   66674 logs.go:282] 0 containers: []
	W1105 19:05:50.861885   66674 logs.go:284] No container was found matching "coredns"
	I1105 19:05:50.861892   66674 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:05:50.861941   66674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:05:50.902723   66674 cri.go:89] found id: ""
	I1105 19:05:50.902753   66674 logs.go:282] 0 containers: []
	W1105 19:05:50.902764   66674 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:05:50.902772   66674 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:05:50.902837   66674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:05:50.946527   66674 cri.go:89] found id: ""
	I1105 19:05:50.946558   66674 logs.go:282] 0 containers: []
	W1105 19:05:50.946570   66674 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:05:50.946577   66674 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:05:50.946640   66674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:05:50.985817   66674 cri.go:89] found id: ""
	I1105 19:05:50.985845   66674 logs.go:282] 0 containers: []
	W1105 19:05:50.985855   66674 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:05:50.985862   66674 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:05:50.985921   66674 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:05:51.026313   66674 cri.go:89] found id: ""
	I1105 19:05:51.026336   66674 logs.go:282] 0 containers: []
	W1105 19:05:51.026343   66674 logs.go:284] No container was found matching "kindnet"
	I1105 19:05:51.026351   66674 logs.go:123] Gathering logs for kubelet ...
	I1105 19:05:51.026361   66674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:05:51.074029   66674 logs.go:123] Gathering logs for dmesg ...
	I1105 19:05:51.074063   66674 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:05:51.087192   66674 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:05:51.087217   66674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:05:51.199174   66674 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:05:51.199199   66674 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:05:51.199215   66674 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:05:51.299226   66674 logs.go:123] Gathering logs for container status ...
	I1105 19:05:51.299264   66674 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1105 19:05:51.334884   66674 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 19:05:51.334935   66674 out.go:270] * 
	* 
	W1105 19:05:51.334999   66674 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:05:51.335017   66674 out.go:270] * 
	* 
	W1105 19:05:51.335845   66674 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:05:51.338914   66674 out.go:201] 
	W1105 19:05:51.340010   66674 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:05:51.340064   66674 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 19:05:51.340099   66674 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 19:05:51.341302   66674 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-567666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 6 (216.583158ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:51.606196   73601 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-567666" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (316.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-459223 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-459223 --alsologtostderr -v=3: exit status 82 (2m0.50785789s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-459223"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 19:03:14.173699   72491 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:03:14.173796   72491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:03:14.173800   72491 out.go:358] Setting ErrFile to fd 2...
	I1105 19:03:14.173804   72491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:03:14.173975   72491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:03:14.174184   72491 out.go:352] Setting JSON to false
	I1105 19:03:14.174254   72491 mustload.go:65] Loading cluster: no-preload-459223
	I1105 19:03:14.174589   72491 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:03:14.174661   72491 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/config.json ...
	I1105 19:03:14.174818   72491 mustload.go:65] Loading cluster: no-preload-459223
	I1105 19:03:14.174916   72491 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:03:14.174947   72491 stop.go:39] StopHost: no-preload-459223
	I1105 19:03:14.175414   72491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:03:14.175452   72491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:03:14.190021   72491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I1105 19:03:14.190505   72491 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:03:14.191048   72491 main.go:141] libmachine: Using API Version  1
	I1105 19:03:14.191074   72491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:03:14.191380   72491 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:03:14.193662   72491 out.go:177] * Stopping node "no-preload-459223"  ...
	I1105 19:03:14.194881   72491 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1105 19:03:14.194921   72491 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:03:14.195138   72491 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1105 19:03:14.195166   72491 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:03:14.198093   72491 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:03:14.198459   72491 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:02:00 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:03:14.198492   72491 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:03:14.198622   72491 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:03:14.198793   72491 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:03:14.198946   72491 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:03:14.199102   72491 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:03:14.303851   72491 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1105 19:03:14.369694   72491 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1105 19:03:14.430043   72491 main.go:141] libmachine: Stopping "no-preload-459223"...
	I1105 19:03:14.430076   72491 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:03:14.431793   72491 main.go:141] libmachine: (no-preload-459223) Calling .Stop
	I1105 19:03:14.435766   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 0/120
	I1105 19:03:15.437301   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 1/120
	I1105 19:03:16.438504   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 2/120
	I1105 19:03:17.439946   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 3/120
	I1105 19:03:18.442448   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 4/120
	I1105 19:03:19.444418   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 5/120
	I1105 19:03:20.448145   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 6/120
	I1105 19:03:21.449604   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 7/120
	I1105 19:03:22.451204   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 8/120
	I1105 19:03:23.452542   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 9/120
	I1105 19:03:24.454775   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 10/120
	I1105 19:03:25.456779   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 11/120
	I1105 19:03:26.458459   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 12/120
	I1105 19:03:27.460036   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 13/120
	I1105 19:03:28.461430   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 14/120
	I1105 19:03:29.463321   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 15/120
	I1105 19:03:30.465480   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 16/120
	I1105 19:03:31.466926   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 17/120
	I1105 19:03:32.468524   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 18/120
	I1105 19:03:33.470055   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 19/120
	I1105 19:03:34.471967   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 20/120
	I1105 19:03:35.473447   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 21/120
	I1105 19:03:36.474608   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 22/120
	I1105 19:03:37.476474   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 23/120
	I1105 19:03:38.478045   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 24/120
	I1105 19:03:39.479897   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 25/120
	I1105 19:03:40.481281   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 26/120
	I1105 19:03:41.482547   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 27/120
	I1105 19:03:42.483869   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 28/120
	I1105 19:03:43.485341   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 29/120
	I1105 19:03:44.487519   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 30/120
	I1105 19:03:45.489067   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 31/120
	I1105 19:03:46.491063   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 32/120
	I1105 19:03:47.492337   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 33/120
	I1105 19:03:48.494116   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 34/120
	I1105 19:03:49.495993   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 35/120
	I1105 19:03:50.497426   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 36/120
	I1105 19:03:51.498916   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 37/120
	I1105 19:03:52.500317   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 38/120
	I1105 19:03:53.501530   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 39/120
	I1105 19:03:54.503504   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 40/120
	I1105 19:03:55.504845   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 41/120
	I1105 19:03:56.506309   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 42/120
	I1105 19:03:57.507981   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 43/120
	I1105 19:03:58.509443   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 44/120
	I1105 19:03:59.511368   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 45/120
	I1105 19:04:00.513025   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 46/120
	I1105 19:04:01.514601   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 47/120
	I1105 19:04:02.516072   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 48/120
	I1105 19:04:03.517505   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 49/120
	I1105 19:04:04.519714   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 50/120
	I1105 19:04:05.521755   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 51/120
	I1105 19:04:06.523118   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 52/120
	I1105 19:04:07.524348   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 53/120
	I1105 19:04:08.525859   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 54/120
	I1105 19:04:09.528163   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 55/120
	I1105 19:04:10.530404   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 56/120
	I1105 19:04:11.531670   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 57/120
	I1105 19:04:12.533022   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 58/120
	I1105 19:04:13.534294   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 59/120
	I1105 19:04:14.536210   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 60/120
	I1105 19:04:15.537504   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 61/120
	I1105 19:04:16.539365   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 62/120
	I1105 19:04:17.540831   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 63/120
	I1105 19:04:18.542414   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 64/120
	I1105 19:04:19.544545   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 65/120
	I1105 19:04:20.545921   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 66/120
	I1105 19:04:21.547573   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 67/120
	I1105 19:04:22.548932   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 68/120
	I1105 19:04:23.550165   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 69/120
	I1105 19:04:24.552309   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 70/120
	I1105 19:04:25.553557   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 71/120
	I1105 19:04:26.554961   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 72/120
	I1105 19:04:27.556327   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 73/120
	I1105 19:04:28.557608   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 74/120
	I1105 19:04:29.559562   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 75/120
	I1105 19:04:30.561433   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 76/120
	I1105 19:04:31.562692   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 77/120
	I1105 19:04:32.563970   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 78/120
	I1105 19:04:33.565490   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 79/120
	I1105 19:04:34.567813   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 80/120
	I1105 19:04:35.569164   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 81/120
	I1105 19:04:36.570562   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 82/120
	I1105 19:04:37.571929   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 83/120
	I1105 19:04:38.573172   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 84/120
	I1105 19:04:39.575140   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 85/120
	I1105 19:04:40.576443   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 86/120
	I1105 19:04:41.577941   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 87/120
	I1105 19:04:42.579426   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 88/120
	I1105 19:04:43.581327   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 89/120
	I1105 19:04:44.583438   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 90/120
	I1105 19:04:45.584809   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 91/120
	I1105 19:04:46.585992   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 92/120
	I1105 19:04:47.587361   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 93/120
	I1105 19:04:48.588557   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 94/120
	I1105 19:04:49.590599   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 95/120
	I1105 19:04:50.591923   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 96/120
	I1105 19:04:51.593242   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 97/120
	I1105 19:04:52.594539   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 98/120
	I1105 19:04:53.595725   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 99/120
	I1105 19:04:54.597800   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 100/120
	I1105 19:04:55.599070   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 101/120
	I1105 19:04:56.600278   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 102/120
	I1105 19:04:57.601560   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 103/120
	I1105 19:04:58.602777   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 104/120
	I1105 19:04:59.604614   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 105/120
	I1105 19:05:00.605957   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 106/120
	I1105 19:05:01.608135   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 107/120
	I1105 19:05:02.609575   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 108/120
	I1105 19:05:03.611045   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 109/120
	I1105 19:05:04.613353   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 110/120
	I1105 19:05:05.614749   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 111/120
	I1105 19:05:06.616090   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 112/120
	I1105 19:05:07.617679   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 113/120
	I1105 19:05:08.619394   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 114/120
	I1105 19:05:09.621573   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 115/120
	I1105 19:05:10.623058   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 116/120
	I1105 19:05:11.624330   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 117/120
	I1105 19:05:12.625806   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 118/120
	I1105 19:05:13.627195   72491 main.go:141] libmachine: (no-preload-459223) Waiting for machine to stop 119/120
	I1105 19:05:14.628110   72491 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1105 19:05:14.628185   72491 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1105 19:05:14.629936   72491 out.go:201] 
	W1105 19:05:14.631268   72491 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1105 19:05:14.631287   72491 out.go:270] * 
	* 
	W1105 19:05:14.633923   72491 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:05:14.635097   72491 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-459223 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223
E1105 19:05:15.948958   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223: exit status 3 (18.630822857s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:33.267329   73211 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host
	E1105 19:05:33.267348   73211 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-459223" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.14s)
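Note: the stderr trace above shows the stop path polling the kvm2 driver roughly once per second for up to 120 iterations ("Waiting for machine to stop N/120") and then giving up with GUEST_STOP_TIMEOUT (exit status 82) because the guest never leaves the "Running" state. A minimal Go sketch of that polling pattern, assuming a 1-second interval and a hypothetical vmState() helper rather than the real libmachine plugin API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState is a hypothetical stand-in for the driver's state query
	// (the real flow goes through the libmachine plugin's .GetState call).
	func vmState() string { return "Running" }

	// stopAndWait mirrors the loop visible in the log: after requesting a
	// stop, poll once per second for up to maxWait iterations, then fail.
	func stopAndWait(maxWait int) error {
		for i := 0; i < maxWait; i++ {
			if vmState() != "Running" {
				return nil // guest shut down cleanly
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// 120 iterations matches the 2m0s window seen in the failure above;
		// minikube maps the returned error to GUEST_STOP_TIMEOUT / exit 82.
		if err := stopAndWait(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}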

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-271881 --alsologtostderr -v=3
E1105 19:03:21.503170   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:35.987898   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:41.984793   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:06.921801   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:16.949262   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-271881 --alsologtostderr -v=3: exit status 82 (2m0.50488428s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-271881"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 19:03:21.456297   72673 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:03:21.456433   72673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:03:21.456445   72673 out.go:358] Setting ErrFile to fd 2...
	I1105 19:03:21.456452   72673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:03:21.456646   72673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:03:21.456859   72673 out.go:352] Setting JSON to false
	I1105 19:03:21.456933   72673 mustload.go:65] Loading cluster: embed-certs-271881
	I1105 19:03:21.457322   72673 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:03:21.457422   72673 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/config.json ...
	I1105 19:03:21.457610   72673 mustload.go:65] Loading cluster: embed-certs-271881
	I1105 19:03:21.457753   72673 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:03:21.457779   72673 stop.go:39] StopHost: embed-certs-271881
	I1105 19:03:21.458187   72673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:03:21.458238   72673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:03:21.472752   72673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I1105 19:03:21.473243   72673 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:03:21.473915   72673 main.go:141] libmachine: Using API Version  1
	I1105 19:03:21.473942   72673 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:03:21.474243   72673 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:03:21.476734   72673 out.go:177] * Stopping node "embed-certs-271881"  ...
	I1105 19:03:21.477935   72673 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1105 19:03:21.477994   72673 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:03:21.478225   72673 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1105 19:03:21.478257   72673 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:03:21.481224   72673 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:03:21.481581   72673 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:02:28 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:03:21.481614   72673 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:03:21.481796   72673 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:03:21.481975   72673 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:03:21.482157   72673 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:03:21.482294   72673 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:03:21.598754   72673 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1105 19:03:21.654688   72673 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1105 19:03:21.712382   72673 main.go:141] libmachine: Stopping "embed-certs-271881"...
	I1105 19:03:21.712429   72673 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:03:21.714316   72673 main.go:141] libmachine: (embed-certs-271881) Calling .Stop
	I1105 19:03:21.718346   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 0/120
	I1105 19:03:22.720147   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 1/120
	I1105 19:03:23.721433   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 2/120
	I1105 19:03:24.722803   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 3/120
	I1105 19:03:25.724430   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 4/120
	I1105 19:03:26.726670   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 5/120
	I1105 19:03:27.728153   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 6/120
	I1105 19:03:28.729533   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 7/120
	I1105 19:03:29.731073   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 8/120
	I1105 19:03:30.732211   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 9/120
	I1105 19:03:31.733782   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 10/120
	I1105 19:03:32.735260   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 11/120
	I1105 19:03:33.736586   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 12/120
	I1105 19:03:34.737875   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 13/120
	I1105 19:03:35.739328   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 14/120
	I1105 19:03:36.741134   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 15/120
	I1105 19:03:37.743898   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 16/120
	I1105 19:03:38.745269   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 17/120
	I1105 19:03:39.746403   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 18/120
	I1105 19:03:40.747918   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 19/120
	I1105 19:03:41.750106   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 20/120
	I1105 19:03:42.751360   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 21/120
	I1105 19:03:43.752867   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 22/120
	I1105 19:03:44.754033   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 23/120
	I1105 19:03:45.755287   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 24/120
	I1105 19:03:46.757483   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 25/120
	I1105 19:03:47.758735   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 26/120
	I1105 19:03:48.760075   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 27/120
	I1105 19:03:49.761368   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 28/120
	I1105 19:03:50.763029   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 29/120
	I1105 19:03:51.765268   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 30/120
	I1105 19:03:52.766612   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 31/120
	I1105 19:03:53.768135   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 32/120
	I1105 19:03:54.769629   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 33/120
	I1105 19:03:55.771223   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 34/120
	I1105 19:03:56.772730   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 35/120
	I1105 19:03:57.773913   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 36/120
	I1105 19:03:58.775217   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 37/120
	I1105 19:03:59.776454   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 38/120
	I1105 19:04:00.777795   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 39/120
	I1105 19:04:01.779947   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 40/120
	I1105 19:04:02.781484   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 41/120
	I1105 19:04:03.782878   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 42/120
	I1105 19:04:04.784346   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 43/120
	I1105 19:04:05.785611   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 44/120
	I1105 19:04:06.787247   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 45/120
	I1105 19:04:07.788487   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 46/120
	I1105 19:04:08.790082   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 47/120
	I1105 19:04:09.791556   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 48/120
	I1105 19:04:10.793067   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 49/120
	I1105 19:04:11.795120   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 50/120
	I1105 19:04:12.796489   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 51/120
	I1105 19:04:13.797693   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 52/120
	I1105 19:04:14.799087   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 53/120
	I1105 19:04:15.800575   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 54/120
	I1105 19:04:16.802693   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 55/120
	I1105 19:04:17.804012   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 56/120
	I1105 19:04:18.805430   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 57/120
	I1105 19:04:19.806673   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 58/120
	I1105 19:04:20.808007   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 59/120
	I1105 19:04:21.810070   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 60/120
	I1105 19:04:22.811531   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 61/120
	I1105 19:04:23.813504   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 62/120
	I1105 19:04:24.814867   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 63/120
	I1105 19:04:25.816342   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 64/120
	I1105 19:04:26.818220   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 65/120
	I1105 19:04:27.819626   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 66/120
	I1105 19:04:28.821379   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 67/120
	I1105 19:04:29.822625   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 68/120
	I1105 19:04:30.823870   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 69/120
	I1105 19:04:31.826148   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 70/120
	I1105 19:04:32.827453   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 71/120
	I1105 19:04:33.828875   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 72/120
	I1105 19:04:34.830196   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 73/120
	I1105 19:04:35.831740   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 74/120
	I1105 19:04:36.833762   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 75/120
	I1105 19:04:37.835112   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 76/120
	I1105 19:04:38.836602   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 77/120
	I1105 19:04:39.838015   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 78/120
	I1105 19:04:40.839420   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 79/120
	I1105 19:04:41.841614   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 80/120
	I1105 19:04:42.843164   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 81/120
	I1105 19:04:43.845426   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 82/120
	I1105 19:04:44.846897   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 83/120
	I1105 19:04:45.848297   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 84/120
	I1105 19:04:46.850258   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 85/120
	I1105 19:04:47.851615   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 86/120
	I1105 19:04:48.853061   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 87/120
	I1105 19:04:49.854501   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 88/120
	I1105 19:04:50.855863   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 89/120
	I1105 19:04:51.858152   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 90/120
	I1105 19:04:52.859420   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 91/120
	I1105 19:04:53.860944   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 92/120
	I1105 19:04:54.862315   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 93/120
	I1105 19:04:55.863734   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 94/120
	I1105 19:04:56.865869   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 95/120
	I1105 19:04:57.867267   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 96/120
	I1105 19:04:58.868532   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 97/120
	I1105 19:04:59.869855   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 98/120
	I1105 19:05:00.871511   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 99/120
	I1105 19:05:01.873841   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 100/120
	I1105 19:05:02.875493   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 101/120
	I1105 19:05:03.877159   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 102/120
	I1105 19:05:04.878712   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 103/120
	I1105 19:05:05.880341   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 104/120
	I1105 19:05:06.882710   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 105/120
	I1105 19:05:07.884302   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 106/120
	I1105 19:05:08.885843   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 107/120
	I1105 19:05:09.887249   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 108/120
	I1105 19:05:10.889047   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 109/120
	I1105 19:05:11.890540   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 110/120
	I1105 19:05:12.892014   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 111/120
	I1105 19:05:13.893597   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 112/120
	I1105 19:05:14.895019   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 113/120
	I1105 19:05:15.896573   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 114/120
	I1105 19:05:16.898781   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 115/120
	I1105 19:05:17.900291   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 116/120
	I1105 19:05:18.901751   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 117/120
	I1105 19:05:19.903119   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 118/120
	I1105 19:05:20.904643   72673 main.go:141] libmachine: (embed-certs-271881) Waiting for machine to stop 119/120
	I1105 19:05:21.905146   72673 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1105 19:05:21.905212   72673 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1105 19:05:21.907098   72673 out.go:201] 
	W1105 19:05:21.908400   72673 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1105 19:05:21.908419   72673 out.go:270] * 
	* 
	W1105 19:05:21.910871   72673 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:05:21.912243   72673 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-271881 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881
E1105 19:05:26.190396   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:31.241866   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881: exit status 3 (18.521637825s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:40.435350   73273 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E1105 19:05:40.435372   73273 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-271881" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.03s)
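Note: the post-mortem status check in both failures exits with status 3 because the reported host state depends on opening an SSH session to the guest, and the dial fails with "no route to host" once the VM is wedged mid-shutdown. A minimal sketch of that reachability check, assuming a plain TCP dial to port 22 rather than minikube's actual status implementation:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// sshReachable approximates the check implied by the status errors above:
	// if no TCP connection to the guest's SSH port can be made, the host is
	// reported as state "Error", which the helper treats as exit status 3.
	func sshReachable(addr string) string {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Println("status error:", err)
			return "Error"
		}
		conn.Close()
		return "Running"
	}

	func main() {
		// IP taken from the embed-certs-271881 DHCP lease in the log above;
		// substitute the address of whichever profile is being inspected.
		fmt.Println(sshReachable("192.168.39.58:22"))
	}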

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-608095 --alsologtostderr -v=3
E1105 19:04:50.266312   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:50.272733   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:50.284166   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:50.305584   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:50.347067   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:50.428520   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:50.590310   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:50.912146   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:51.553679   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:52.834998   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:04:55.396309   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:00.517868   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:05.695200   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:05.701625   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:05.713111   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:05.734578   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:05.776076   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:05.857693   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:06.019795   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:06.341550   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:06.983392   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:08.264927   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:10.759483   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:10.827061   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-608095 --alsologtostderr -v=3: exit status 82 (2m0.499862344s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-608095"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 19:04:33.222688   73052 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:04:33.223039   73052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:04:33.223058   73052 out.go:358] Setting ErrFile to fd 2...
	I1105 19:04:33.223065   73052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:04:33.223379   73052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:04:33.223687   73052 out.go:352] Setting JSON to false
	I1105 19:04:33.223796   73052 mustload.go:65] Loading cluster: default-k8s-diff-port-608095
	I1105 19:04:33.224322   73052 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:04:33.224443   73052 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/config.json ...
	I1105 19:04:33.224680   73052 mustload.go:65] Loading cluster: default-k8s-diff-port-608095
	I1105 19:04:33.224831   73052 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:04:33.224866   73052 stop.go:39] StopHost: default-k8s-diff-port-608095
	I1105 19:04:33.225447   73052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:04:33.225501   73052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:04:33.239665   73052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45561
	I1105 19:04:33.240134   73052 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:04:33.240784   73052 main.go:141] libmachine: Using API Version  1
	I1105 19:04:33.240814   73052 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:04:33.241200   73052 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:04:33.243426   73052 out.go:177] * Stopping node "default-k8s-diff-port-608095"  ...
	I1105 19:04:33.244806   73052 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1105 19:04:33.244839   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:04:33.245039   73052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1105 19:04:33.245060   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:04:33.247957   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:04:33.248334   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:03:06 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:04:33.248355   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:04:33.248511   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:04:33.248667   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:04:33.248809   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:04:33.248948   73052 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:04:33.355231   73052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1105 19:04:33.417902   73052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1105 19:04:33.473150   73052 main.go:141] libmachine: Stopping "default-k8s-diff-port-608095"...
	I1105 19:04:33.473190   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:04:33.474810   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Stop
	I1105 19:04:33.478002   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 0/120
	I1105 19:04:34.479397   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 1/120
	I1105 19:04:35.480756   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 2/120
	I1105 19:04:36.481955   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 3/120
	I1105 19:04:37.483261   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 4/120
	I1105 19:04:38.485196   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 5/120
	I1105 19:04:39.486626   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 6/120
	I1105 19:04:40.488006   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 7/120
	I1105 19:04:41.489461   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 8/120
	I1105 19:04:42.490753   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 9/120
	I1105 19:04:43.492110   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 10/120
	I1105 19:04:44.493453   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 11/120
	I1105 19:04:45.495272   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 12/120
	I1105 19:04:46.496565   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 13/120
	I1105 19:04:47.497980   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 14/120
	I1105 19:04:48.500032   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 15/120
	I1105 19:04:49.501555   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 16/120
	I1105 19:04:50.502946   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 17/120
	I1105 19:04:51.504296   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 18/120
	I1105 19:04:52.505809   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 19/120
	I1105 19:04:53.507960   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 20/120
	I1105 19:04:54.509460   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 21/120
	I1105 19:04:55.510884   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 22/120
	I1105 19:04:56.512165   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 23/120
	I1105 19:04:57.513631   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 24/120
	I1105 19:04:58.515572   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 25/120
	I1105 19:04:59.517577   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 26/120
	I1105 19:05:00.519045   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 27/120
	I1105 19:05:01.520669   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 28/120
	I1105 19:05:02.522280   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 29/120
	I1105 19:05:03.523744   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 30/120
	I1105 19:05:04.525289   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 31/120
	I1105 19:05:05.526890   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 32/120
	I1105 19:05:06.528338   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 33/120
	I1105 19:05:07.530039   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 34/120
	I1105 19:05:08.532265   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 35/120
	I1105 19:05:09.533874   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 36/120
	I1105 19:05:10.535394   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 37/120
	I1105 19:05:11.536816   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 38/120
	I1105 19:05:12.538211   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 39/120
	I1105 19:05:13.539754   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 40/120
	I1105 19:05:14.541411   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 41/120
	I1105 19:05:15.542848   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 42/120
	I1105 19:05:16.544404   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 43/120
	I1105 19:05:17.545751   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 44/120
	I1105 19:05:18.547752   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 45/120
	I1105 19:05:19.549684   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 46/120
	I1105 19:05:20.551077   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 47/120
	I1105 19:05:21.552520   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 48/120
	I1105 19:05:22.554350   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 49/120
	I1105 19:05:23.556596   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 50/120
	I1105 19:05:24.558133   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 51/120
	I1105 19:05:25.559358   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 52/120
	I1105 19:05:26.560822   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 53/120
	I1105 19:05:27.562343   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 54/120
	I1105 19:05:28.564532   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 55/120
	I1105 19:05:29.565941   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 56/120
	I1105 19:05:30.567484   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 57/120
	I1105 19:05:31.568976   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 58/120
	I1105 19:05:32.570343   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 59/120
	I1105 19:05:33.572674   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 60/120
	I1105 19:05:34.574004   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 61/120
	I1105 19:05:35.575431   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 62/120
	I1105 19:05:36.576731   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 63/120
	I1105 19:05:37.578398   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 64/120
	I1105 19:05:38.580566   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 65/120
	I1105 19:05:39.582052   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 66/120
	I1105 19:05:40.583351   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 67/120
	I1105 19:05:41.584789   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 68/120
	I1105 19:05:42.586063   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 69/120
	I1105 19:05:43.588101   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 70/120
	I1105 19:05:44.589425   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 71/120
	I1105 19:05:45.590771   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 72/120
	I1105 19:05:46.592129   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 73/120
	I1105 19:05:47.593511   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 74/120
	I1105 19:05:48.595731   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 75/120
	I1105 19:05:49.596983   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 76/120
	I1105 19:05:50.598287   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 77/120
	I1105 19:05:51.599771   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 78/120
	I1105 19:05:52.601016   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 79/120
	I1105 19:05:53.603126   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 80/120
	I1105 19:05:54.604472   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 81/120
	I1105 19:05:55.605801   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 82/120
	I1105 19:05:56.607324   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 83/120
	I1105 19:05:57.608636   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 84/120
	I1105 19:05:58.610047   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 85/120
	I1105 19:05:59.611526   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 86/120
	I1105 19:06:00.612903   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 87/120
	I1105 19:06:01.614234   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 88/120
	I1105 19:06:02.615552   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 89/120
	I1105 19:06:03.617885   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 90/120
	I1105 19:06:04.619685   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 91/120
	I1105 19:06:05.621236   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 92/120
	I1105 19:06:06.622606   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 93/120
	I1105 19:06:07.623921   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 94/120
	I1105 19:06:08.626230   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 95/120
	I1105 19:06:09.627608   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 96/120
	I1105 19:06:10.628895   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 97/120
	I1105 19:06:11.630372   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 98/120
	I1105 19:06:12.632000   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 99/120
	I1105 19:06:13.634053   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 100/120
	I1105 19:06:14.635458   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 101/120
	I1105 19:06:15.637168   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 102/120
	I1105 19:06:16.639029   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 103/120
	I1105 19:06:17.640516   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 104/120
	I1105 19:06:18.643012   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 105/120
	I1105 19:06:19.644630   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 106/120
	I1105 19:06:20.646337   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 107/120
	I1105 19:06:21.647774   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 108/120
	I1105 19:06:22.649320   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 109/120
	I1105 19:06:23.651741   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 110/120
	I1105 19:06:24.653367   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 111/120
	I1105 19:06:25.654842   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 112/120
	I1105 19:06:26.656354   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 113/120
	I1105 19:06:27.657730   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 114/120
	I1105 19:06:28.659855   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 115/120
	I1105 19:06:29.661410   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 116/120
	I1105 19:06:30.662834   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 117/120
	I1105 19:06:31.664311   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 118/120
	I1105 19:06:32.665776   73052 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for machine to stop 119/120
	I1105 19:06:33.666271   73052 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1105 19:06:33.666342   73052 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1105 19:06:33.668133   73052 out.go:201] 
	W1105 19:06:33.669466   73052 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1105 19:06:33.669484   73052 out.go:270] * 
	* 
	W1105 19:06:33.671964   73052 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:06:33.673107   73052 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-608095 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
E1105 19:06:37.462547   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:37.468944   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:37.480294   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:37.501658   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:37.543563   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:37.625045   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:37.786589   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:38.108296   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:38.750346   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:40.031946   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:42.593638   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:47.714954   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095: exit status 3 (18.440312269s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:06:52.115307   73936 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.10:22: connect: no route to host
	E1105 19:06:52.115334   73936 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.10:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-608095" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.94s)
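The stop failure above follows a fixed pattern: minikube polls the VM state once per second for 120 attempts ("Waiting for machine to stop N/120"), then gives up with GUEST_STOP_TIMEOUT and exit status 82 because the guest never leaves the "Running" state. The Go sketch below is illustrative only (waitForStop and getState are hypothetical names, not minikube's actual implementation); it simply mirrors the bounded polling loop visible in the log.

	// Illustrative sketch only: a bounded stop-wait loop mirroring the
	// "Waiting for machine to stop N/120" lines above. waitForStop and
	// getState are made-up names, not minikube's real API.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState stands in for a hypervisor query; stubbed so the sketch runs.
	func getState() string { return "Running" }

	// waitForStop polls until the machine reports "Stopped" or attempts run out.
	func waitForStop(attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// 120 one-second attempts (~2 minutes), as in the log above.
		if err := waitForStop(120, time.Second); err != nil {
			fmt.Println("GUEST_STOP_TIMEOUT:", err)
		}
	}

A bounded wait of this shape is why the test fails after roughly 139 s (the ~120 s polling window plus setup overhead) instead of hanging indefinitely.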

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223: exit status 3 (3.16805382s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:36.435328   73336 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host
	E1105 19:05:36.435350   73336 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-459223 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1105 19:05:38.871143   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-459223 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151814101s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-459223 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223: exit status 3 (3.063781984s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:45.651366   73448 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host
	E1105 19:05:45.651400   73448 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-459223" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
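The EnableAddonAfterStop failures all reduce to the same post-stop check: the test shells out to `out/minikube-linux-amd64 status --format={{.Host}}` and expects the literal string "Stopped", but because the earlier stop timed out the VM is unreachable over SSH ("no route to host") and status prints "Error" with exit status 3. The sketch below shows that check under stated assumptions; the command and expected value come from the log, while checkHostStopped is a hypothetical helper name, not the test's real code.

	// Illustrative sketch of the post-stop status check seen above; only the
	// command line and the expected "Stopped" value are taken from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// checkHostStopped runs `minikube status --format={{.Host}} -p <profile>`
	// and verifies the reported host state is "Stopped".
	func checkHostStopped(binary, profile string) error {
		out, err := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
		got := strings.TrimSpace(string(out))
		if got != "Stopped" {
			// This is the branch taken above: status reports "Error" because the
			// VM is still running but SSH gets "no route to host".
			return fmt.Errorf("expected post-stop host status %q but got %q (err: %v)", "Stopped", got, err)
		}
		return nil
	}

	func main() {
		if err := checkHostStopped("out/minikube-linux-amd64", "no-preload-459223"); err != nil {
			fmt.Println(err)
		}
	}

The addon-enable step that follows also needs SSH to the node, so it fails for the same reason (MK_ADDON_ENABLE_PAUSED while listing paused containers over the dead connection).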

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881: exit status 3 (3.167855487s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:43.603293   73402 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E1105 19:05:43.603313   73402 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-271881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1105 19:05:44.869088   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-271881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151884416s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-271881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881: exit status 3 (3.064027501s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:52.819326   73553 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E1105 19:05:52.819347   73553 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-271881" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-567666 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-567666 create -f testdata/busybox.yaml: exit status 1 (43.329838ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-567666" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-567666 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 6 (216.913559ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:51.864403   73641 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-567666" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666
E1105 19:05:52.080738   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:52.087123   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 6 (209.191205ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:05:52.076229   73671 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-567666" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (114.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-567666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1105 19:05:52.098603   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:52.120048   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:52.161514   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:52.242993   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:52.404832   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:52.726406   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-567666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m53.774491875s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-567666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-567666 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-567666 describe deploy/metrics-server -n kube-system: exit status 1 (47.118735ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-567666" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-567666 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 6 (218.676238ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:07:46.116617   74335 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-567666" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (114.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095: exit status 3 (3.167754864s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:06:55.283383   74031 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.10:22: connect: no route to host
	E1105 19:06:55.283405   74031 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.10:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-608095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1105 19:06:57.957010   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-608095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153102726s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.10:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-608095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095: exit status 3 (3.063167224s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 19:07:04.499372   74111 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.10:22: connect: no route to host
	E1105 19:07:04.499392   74111 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.10:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-608095" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (704.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-567666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1105 19:07:55.008669   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:59.402122   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:08:01.006952   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:08:02.901331   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:08:22.712673   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:08:28.711165   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:08:35.941393   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:08:43.864166   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:09:06.920946   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:09:21.323844   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:09:50.266209   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:10:05.695151   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:10:05.785550   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:10:17.968557   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:10:33.398247   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:10:52.080990   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:11:19.782951   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:11:37.462235   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:12:05.165184   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:12:21.924830   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:12:31.418928   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:12:49.627278   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:12:55.009519   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:13:01.006538   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:13:54.492786   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:14:06.921797   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:14:50.265863   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:15:05.695567   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-567666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m40.791037813s)

                                                
                                                
-- stdout --
	* [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-567666" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 19:07:52.649090   74485 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:07:52.649200   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649205   74485 out.go:358] Setting ErrFile to fd 2...
	I1105 19:07:52.649210   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649374   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:07:52.649909   74485 out.go:352] Setting JSON to false
	I1105 19:07:52.650785   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6615,"bootTime":1730827058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:07:52.650878   74485 start.go:139] virtualization: kvm guest
	I1105 19:07:52.652866   74485 out.go:177] * [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:07:52.654107   74485 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:07:52.654107   74485 notify.go:220] Checking for updates...
	I1105 19:07:52.655282   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:07:52.656379   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:07:52.657451   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:07:52.658694   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:07:52.659835   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:07:52.661251   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:07:52.661622   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.661660   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.677005   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I1105 19:07:52.677521   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.678096   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.678118   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.678489   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.678735   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.680466   74485 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1105 19:07:52.681734   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:07:52.682087   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.682139   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.697071   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1105 19:07:52.697503   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.697958   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.697980   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.698259   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.698439   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.732962   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:07:52.734079   74485 start.go:297] selected driver: kvm2
	I1105 19:07:52.734094   74485 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.734209   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:07:52.734912   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.735038   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:07:52.750214   74485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:07:52.750609   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:07:52.750641   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:07:52.750697   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:07:52.750745   74485 start.go:340] cluster config:
	{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.750877   74485 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.753288   74485 out.go:177] * Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	I1105 19:07:52.754354   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:07:52.754407   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 19:07:52.754425   74485 cache.go:56] Caching tarball of preloaded images
	I1105 19:07:52.754503   74485 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:07:52.754515   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 19:07:52.754610   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:07:52.754817   74485 start.go:360] acquireMachinesLock for old-k8s-version-567666: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:11:03.663928   74485 start.go:364] duration metric: took 3m10.909065205s to acquireMachinesLock for "old-k8s-version-567666"
	I1105 19:11:03.664023   74485 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:03.664038   74485 fix.go:54] fixHost starting: 
	I1105 19:11:03.664514   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:03.664569   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:03.682846   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I1105 19:11:03.683341   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:03.683786   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:11:03.683812   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:03.684219   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:03.684407   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:03.684552   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetState
	I1105 19:11:03.686262   74485 fix.go:112] recreateIfNeeded on old-k8s-version-567666: state=Stopped err=<nil>
	I1105 19:11:03.686295   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	W1105 19:11:03.686440   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:03.688047   74485 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-567666" ...
	I1105 19:11:03.689374   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .Start
	I1105 19:11:03.689560   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring networks are active...
	I1105 19:11:03.690290   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network default is active
	I1105 19:11:03.690659   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network mk-old-k8s-version-567666 is active
	I1105 19:11:03.691130   74485 main.go:141] libmachine: (old-k8s-version-567666) Getting domain xml...
	I1105 19:11:03.691890   74485 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:11:05.006949   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting to get IP...
	I1105 19:11:05.008062   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.008547   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.008605   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.008523   75309 retry.go:31] will retry after 290.124771ms: waiting for machine to come up
	I1105 19:11:05.300185   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.300768   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.300803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.300717   75309 retry.go:31] will retry after 292.829683ms: waiting for machine to come up
	I1105 19:11:05.595365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.595881   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.595907   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.595831   75309 retry.go:31] will retry after 447.168257ms: waiting for machine to come up
	I1105 19:11:06.045320   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.045946   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.045976   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.045893   75309 retry.go:31] will retry after 420.272812ms: waiting for machine to come up
	I1105 19:11:06.467556   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.468012   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.468039   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.467962   75309 retry.go:31] will retry after 657.733497ms: waiting for machine to come up
	I1105 19:11:07.128022   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:07.128531   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:07.128559   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:07.128484   75309 retry.go:31] will retry after 922.664226ms: waiting for machine to come up
	I1105 19:11:08.053120   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:08.053610   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:08.053636   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:08.053587   75309 retry.go:31] will retry after 947.415519ms: waiting for machine to come up
	I1105 19:11:09.002803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:09.003423   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:09.003452   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:09.003363   75309 retry.go:31] will retry after 1.07978111s: waiting for machine to come up
	I1105 19:11:10.084404   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:10.084808   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:10.084830   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:10.084784   75309 retry.go:31] will retry after 1.482510322s: waiting for machine to come up
	I1105 19:11:11.568421   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:11.568840   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:11.568869   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:11.568791   75309 retry.go:31] will retry after 1.630983434s: waiting for machine to come up
	I1105 19:11:13.201891   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:13.202425   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:13.202453   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:13.202387   75309 retry.go:31] will retry after 2.689744765s: waiting for machine to come up
	I1105 19:11:15.893632   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:15.893989   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:15.894034   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:15.893964   75309 retry.go:31] will retry after 2.460566804s: waiting for machine to come up
	I1105 19:11:18.357643   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:18.358065   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:18.358099   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:18.358009   75309 retry.go:31] will retry after 3.036834524s: waiting for machine to come up
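The retries above show the provisioner polling the libvirt DHCP leases with a delay that grows on each attempt until the guest reports an address. A minimal backoff sketch in Go of that pattern, built around a hypothetical waitFor helper (illustrative only, not minikube's actual retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries probe with a delay that grows by half each attempt,
// giving up once the overall deadline passes.
func waitFor(probe func() error, initial, timeout time.Duration) error {
	delay := initial
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 { // pretend the DHCP lease appears on the fourth poll
			return errors.New("no IP yet")
		}
		return nil
	}, 300*time.Millisecond, 30*time.Second)
	fmt.Println("result:", err)
}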
	I1105 19:11:21.398221   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398763   74485 main.go:141] libmachine: (old-k8s-version-567666) Found IP for machine: 192.168.61.125
	I1105 19:11:21.398825   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has current primary IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398843   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserving static IP address...
	I1105 19:11:21.399327   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.399350   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserved static IP address: 192.168.61.125
	I1105 19:11:21.399365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | skip adding static IP to network mk-old-k8s-version-567666 - found existing host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"}
	I1105 19:11:21.399379   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:11:21.399394   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting for SSH to be available...
	I1105 19:11:21.401270   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401664   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.401691   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401866   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:11:21.401897   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:11:21.401935   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:21.401949   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:11:21.401959   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:11:21.527815   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: <nil>: 
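The "exit 0" probe above is how the machine is declared SSH-ready: a session is opened with the machine's generated key and a trivial command is run until it succeeds. A self-contained sketch of the same check using golang.org/x/crypto/ssh, with the address, user, and key path taken from this log (assumed example for illustration, not minikube's sshutil code):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady returns nil once the guest accepts an SSH session and runs "exit 0".
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, skip known_hosts checking
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := sshReady("192.168.61.125:22", "docker",
		"/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa")
	fmt.Println("ssh ready:", err == nil, err)
}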
	I1105 19:11:21.528165   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:11:21.528874   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.531373   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531647   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.531672   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531876   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:11:21.532071   74485 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:21.532092   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:21.532332   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.534177   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534431   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.534465   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534556   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.534716   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534845   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534960   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.535142   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.535329   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.535341   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:21.643321   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:21.643354   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643618   74485 buildroot.go:166] provisioning hostname "old-k8s-version-567666"
	I1105 19:11:21.643646   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643812   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.646230   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646628   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.646666   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.647037   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647167   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647290   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.647421   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.647579   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.647592   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-567666 && echo "old-k8s-version-567666" | sudo tee /etc/hostname
	I1105 19:11:21.770209   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-567666
	
	I1105 19:11:21.770255   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.772932   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773314   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.773346   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773484   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.773691   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773950   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.774121   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.774357   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.774386   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-567666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-567666/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-567666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:21.890834   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:21.890860   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:21.890915   74485 buildroot.go:174] setting up certificates
	I1105 19:11:21.890929   74485 provision.go:84] configureAuth start
	I1105 19:11:21.890944   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.891224   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.893835   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894256   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.894285   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.896436   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896699   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.896715   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896893   74485 provision.go:143] copyHostCerts
	I1105 19:11:21.896951   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:21.896967   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:21.897037   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:21.897163   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:21.897176   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:21.897205   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:21.897279   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:21.897289   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:21.897315   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:21.897396   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-567666 san=[127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666]
	I1105 19:11:21.962153   74485 provision.go:177] copyRemoteCerts
	I1105 19:11:21.962219   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:21.962257   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.964765   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965125   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.965166   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965330   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.965478   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.965603   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.965746   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.048519   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:22.072975   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 19:11:22.098263   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:22.120258   74485 provision.go:87] duration metric: took 229.316972ms to configureAuth
	I1105 19:11:22.120285   74485 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:22.120444   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:11:22.120516   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.123859   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124309   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.124344   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124536   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.124737   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.124922   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.125055   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.125213   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.125375   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.125388   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:22.349922   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:22.349964   74485 machine.go:96] duration metric: took 817.87332ms to provisionDockerMachine
	I1105 19:11:22.349979   74485 start.go:293] postStartSetup for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:11:22.349992   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:22.350014   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.350350   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:22.350385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.352922   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353310   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.353332   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353459   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.353638   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.353807   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.353921   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.437482   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:22.441617   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:22.441646   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:22.441711   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:22.441807   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:22.441929   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:22.451016   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:22.474199   74485 start.go:296] duration metric: took 124.207336ms for postStartSetup
	I1105 19:11:22.474233   74485 fix.go:56] duration metric: took 18.810197154s for fixHost
	I1105 19:11:22.474269   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.476786   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477119   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.477157   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477279   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.477471   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477621   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477753   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.477910   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.478070   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.478081   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:22.583343   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833882.558222038
	
	I1105 19:11:22.583363   74485 fix.go:216] guest clock: 1730833882.558222038
	I1105 19:11:22.583372   74485 fix.go:229] Guest: 2024-11-05 19:11:22.558222038 +0000 UTC Remote: 2024-11-05 19:11:22.474236871 +0000 UTC m=+209.862783450 (delta=83.985167ms)
	I1105 19:11:22.583418   74485 fix.go:200] guest clock delta is within tolerance: 83.985167ms
	I1105 19:11:22.583429   74485 start.go:83] releasing machines lock for "old-k8s-version-567666", held for 18.919444623s
	I1105 19:11:22.583460   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.583717   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:22.586183   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586479   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.586509   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586687   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587137   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587310   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587400   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:22.587448   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.587521   74485 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:22.587548   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.590145   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590474   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.590507   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590530   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590655   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.590831   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.590995   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.591010   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591037   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.591179   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.591286   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.591438   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.591558   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591702   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.702707   74485 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:22.708965   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:22.856764   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:22.863791   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:22.863866   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:22.883997   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:22.884022   74485 start.go:495] detecting cgroup driver to use...
	I1105 19:11:22.884094   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:22.901499   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:22.919358   74485 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:22.919422   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:22.936964   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:22.953538   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:23.077720   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:23.218316   74485 docker.go:233] disabling docker service ...
	I1105 19:11:23.218390   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:23.238316   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:23.251814   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:23.427386   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:23.552928   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:23.567149   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:23.587241   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 19:11:23.587307   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.597558   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:23.597620   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.607466   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.616794   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.626425   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:23.637121   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:23.649243   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:23.649305   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:23.664648   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:23.675060   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:23.812636   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:23.903326   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:23.903404   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:23.908377   74485 start.go:563] Will wait 60s for crictl version
	I1105 19:11:23.908434   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:23.912163   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:23.961712   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:23.961794   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:23.992951   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:24.032041   74485 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1105 19:11:24.033400   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:24.036549   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037128   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:24.037165   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037346   74485 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:24.042641   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:24.055174   74485 kubeadm.go:883] updating cluster {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:24.055327   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:11:24.055388   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:24.101655   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:24.101724   74485 ssh_runner.go:195] Run: which lz4
	I1105 19:11:24.105618   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:24.109705   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:24.109735   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 19:11:25.602158   74485 crio.go:462] duration metric: took 1.496564307s to copy over tarball
	I1105 19:11:25.602236   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:28.701223   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.098952901s)
	I1105 19:11:28.701253   74485 crio.go:469] duration metric: took 3.099065633s to extract the tarball
	I1105 19:11:28.701263   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:28.744214   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:28.778845   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:28.778868   74485 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:28.778962   74485 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:28.778945   74485 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.779024   74485 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.779039   74485 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.778939   74485 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.779067   74485 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.779083   74485 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.778957   74485 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781024   74485 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781003   74485 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.781052   74485 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.781002   74485 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.781088   74485 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.781114   74485 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.013637   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 19:11:29.043928   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.043936   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.044140   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.045892   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.046313   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.055792   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.081724   74485 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 19:11:29.081779   74485 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 19:11:29.081826   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.234925   74485 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 19:11:29.234966   74485 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.235046   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235079   74485 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 19:11:29.235112   74485 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.235136   74485 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 19:11:29.235152   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235167   74485 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.235200   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235238   74485 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 19:11:29.235277   74485 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.235298   74485 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 19:11:29.235320   74485 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.235333   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235352   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235351   74485 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 19:11:29.235385   74485 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.235415   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235426   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.251873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.251960   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.251985   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.252000   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.371298   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.415548   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.415592   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.415654   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.415710   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.415791   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.415868   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.466873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.544593   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.544660   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.586695   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.586714   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.586812   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.586916   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.606582   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 19:11:29.707767   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 19:11:29.707803   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 19:11:29.716195   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 19:11:29.723097   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 19:11:30.039971   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:30.182760   74485 cache_images.go:92] duration metric: took 1.403874987s to LoadCachedImages
	W1105 19:11:30.182890   74485 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1105 19:11:30.182912   74485 kubeadm.go:934] updating node { 192.168.61.125 8443 v1.20.0 crio true true} ...
	I1105 19:11:30.183052   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-567666 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
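(Note: the kubelet unit rendered above is later copied to the node as a systemd drop-in; the transfer appears further down in this log as scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service. A minimal way to inspect the result on the test VM, assuming the profile name old-k8s-version-567666 taken from this log, would be the illustrative commands below; this is not part of the test run itself.
	# inspect the generated kubelet unit and drop-in on the node (illustrative only)
	minikube -p old-k8s-version-567666 ssh -- sudo cat /lib/systemd/system/kubelet.service
	minikube -p old-k8s-version-567666 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	minikube -p old-k8s-version-567666 ssh -- sudo systemctl status kubelet
)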
	I1105 19:11:30.183146   74485 ssh_runner.go:195] Run: crio config
	I1105 19:11:30.235206   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:11:30.235241   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:30.235253   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:30.235277   74485 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-567666 NodeName:old-k8s-version-567666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 19:11:30.235433   74485 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-567666"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
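(Note: the kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and then applied phase by phase during the cluster restart later in this log. Run manually on the node, that restart sequence corresponds to the sketch below; the paths and the v1.20.0 binaries directory are taken directly from this log, and the sketch is illustrative rather than part of the test output.
	# approximate the cluster-restart phases that appear later in this log (illustrative only)
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
)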
	
	I1105 19:11:30.235503   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 19:11:30.245189   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:30.245263   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:30.254772   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1105 19:11:30.271711   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:30.288568   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1105 19:11:30.309098   74485 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:30.313211   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:30.325637   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:30.447346   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:30.466863   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666 for IP: 192.168.61.125
	I1105 19:11:30.466884   74485 certs.go:194] generating shared ca certs ...
	I1105 19:11:30.466898   74485 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:30.467086   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:30.467152   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:30.467165   74485 certs.go:256] generating profile certs ...
	I1105 19:11:30.467322   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key
	I1105 19:11:30.467398   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8
	I1105 19:11:30.467448   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key
	I1105 19:11:30.467614   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:30.467656   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:30.467676   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:30.467722   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:30.467759   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:30.467788   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:30.467847   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:30.468756   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:30.532325   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:30.559936   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:30.592995   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:30.632421   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 19:11:30.662285   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:11:30.696292   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:30.725642   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:30.750231   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:30.773213   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:30.796269   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:30.820261   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:30.837059   74485 ssh_runner.go:195] Run: openssl version
	I1105 19:11:30.842937   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:30.855033   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859637   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859720   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.865747   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:30.877678   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:30.890762   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895576   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895642   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.901686   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:30.912689   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:30.923800   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928911   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928984   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.934782   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
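(Note: the block above installs the minikube CA certificates under /usr/share/ca-certificates and links them into /etc/ssl/certs under their OpenSSL subject-hash names (b5213941.0, 51391683.0, 3ec20f2e.0). The hash-to-symlink step, using only paths from this log, amounts to the illustrative sketch below.
	# how the /etc/ssl/certs symlink names above are derived (illustrative only)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the subject hash, e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
)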
	I1105 19:11:30.947059   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:30.951934   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:30.958065   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:30.965341   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:30.971725   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:30.977606   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:30.983486   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 19:11:30.989212   74485 kubeadm.go:392] StartCluster: {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:30.989350   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:30.989411   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.031794   74485 cri.go:89] found id: ""
	I1105 19:11:31.031884   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:31.043178   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:31.043202   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:31.043291   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:31.054102   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:31.055256   74485 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:31.055924   74485 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-567666" cluster setting kubeconfig missing "old-k8s-version-567666" context setting]
	I1105 19:11:31.056913   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:31.064220   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:31.074582   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.125
	I1105 19:11:31.074618   74485 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:31.074628   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:31.074706   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.111157   74485 cri.go:89] found id: ""
	I1105 19:11:31.111241   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:31.130027   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:31.139917   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:31.139939   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:31.140007   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:31.150790   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:31.150868   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:31.161397   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:31.170394   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:31.170462   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:31.179594   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.188892   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:31.188952   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.199840   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:31.209166   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:31.209244   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:31.219687   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:31.231079   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:31.350667   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.094565   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.334807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.457538   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.534503   74485 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:32.534596   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:33.034690   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:33.535594   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.035526   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.534836   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.034947   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.535108   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.035417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.535438   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.034766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.535415   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:38.035553   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:38.534702   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.035332   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.534749   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.034989   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.535354   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.035624   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.534847   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.035293   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.535363   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.035199   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.534769   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.035551   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.535664   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.035103   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.535581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.035077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.535660   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.035462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.534898   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.035320   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.535496   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.035636   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.535445   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.035499   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.535722   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.035700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.535310   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.035585   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.535468   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.034919   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.535697   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.035353   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.534669   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.034957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.534747   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.035331   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.534699   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:58.034948   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:58.534748   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.034961   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.535634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.035311   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.534756   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.035266   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.535256   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.035489   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.534701   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:03.034795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:03.534764   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.034833   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.534795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.034815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.534885   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.535327   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.035253   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.535011   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:08.035104   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:08.534784   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.035198   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.535319   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.035258   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.534634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.035604   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.535077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.035096   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:13.035100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:13.534793   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.035120   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.535318   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.035062   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.535127   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.034840   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.534830   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.035105   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.534928   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:18.035126   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:18.535446   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.035154   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.535413   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.035580   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.534802   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.035030   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.535250   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.034785   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.534700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.034721   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.534672   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.035358   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.534813   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.535342   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.034934   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.534766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.035389   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.534831   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:28.035226   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:28.535577   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.034984   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.535633   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.035509   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.534907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.535421   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.034719   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.534952   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:32.535067   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:32.575052   74485 cri.go:89] found id: ""
	I1105 19:12:32.575085   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.575096   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:32.575104   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:32.575164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:32.609969   74485 cri.go:89] found id: ""
	I1105 19:12:32.610003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.610011   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:32.610017   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:32.610065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:32.642343   74485 cri.go:89] found id: ""
	I1105 19:12:32.642369   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.642376   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:32.642381   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:32.642426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:32.680144   74485 cri.go:89] found id: ""
	I1105 19:12:32.680177   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.680188   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:32.680196   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:32.680270   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:32.715216   74485 cri.go:89] found id: ""
	I1105 19:12:32.715248   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.715259   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:32.715267   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:32.715321   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:32.751742   74485 cri.go:89] found id: ""
	I1105 19:12:32.751771   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.751795   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:32.751803   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:32.751865   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:32.786944   74485 cri.go:89] found id: ""
	I1105 19:12:32.787003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.787015   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:32.787023   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:32.787080   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:32.820523   74485 cri.go:89] found id: ""
	I1105 19:12:32.820550   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.820557   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:32.820565   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:32.820575   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:32.873960   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:32.874000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:32.889268   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:32.889296   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:33.011825   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:33.011846   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:33.011862   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:33.082785   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:33.082827   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
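(Note: from here the restart loop repeats every few seconds: poll for a kube-apiserver process, list CRI containers for each control-plane component, and gather kubelet, dmesg, CRI-O and container-status logs when nothing is found. A condensed manual version of that diagnostic cycle, built only from commands that appear verbatim in this log, would look like the sketch below.
	# condensed version of the diagnostic cycle repeated below (illustrative only)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
)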
	I1105 19:12:35.630678   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:35.644410   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:35.644492   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:35.679567   74485 cri.go:89] found id: ""
	I1105 19:12:35.679598   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.679607   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:35.679613   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:35.679666   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:35.713685   74485 cri.go:89] found id: ""
	I1105 19:12:35.713713   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.713721   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:35.713726   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:35.713789   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:35.749496   74485 cri.go:89] found id: ""
	I1105 19:12:35.749525   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.749536   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:35.749543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:35.749611   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:35.784228   74485 cri.go:89] found id: ""
	I1105 19:12:35.784254   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.784263   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:35.784269   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:35.784317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:35.818620   74485 cri.go:89] found id: ""
	I1105 19:12:35.818680   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.818696   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:35.818703   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:35.818769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:35.852525   74485 cri.go:89] found id: ""
	I1105 19:12:35.852554   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.852566   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:35.852574   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:35.852648   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:35.887906   74485 cri.go:89] found id: ""
	I1105 19:12:35.887931   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.887939   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:35.887944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:35.887994   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:35.920566   74485 cri.go:89] found id: ""
	I1105 19:12:35.920594   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.920602   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:35.920612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:35.920627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:35.972706   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:35.972742   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:35.986114   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:35.986141   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:36.067016   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:36.067044   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:36.067060   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:36.158947   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:36.159003   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:38.700738   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:38.713280   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:38.713351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:38.747293   74485 cri.go:89] found id: ""
	I1105 19:12:38.747335   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.747347   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:38.747355   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:38.747414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:38.781607   74485 cri.go:89] found id: ""
	I1105 19:12:38.781635   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.781643   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:38.781648   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:38.781703   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:38.815303   74485 cri.go:89] found id: ""
	I1105 19:12:38.815333   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.815342   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:38.815348   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:38.815397   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:38.850128   74485 cri.go:89] found id: ""
	I1105 19:12:38.850156   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.850166   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:38.850174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:38.850233   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:38.882470   74485 cri.go:89] found id: ""
	I1105 19:12:38.882493   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.882500   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:38.882506   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:38.882563   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:38.914669   74485 cri.go:89] found id: ""
	I1105 19:12:38.914698   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.914706   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:38.914713   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:38.914762   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:38.946521   74485 cri.go:89] found id: ""
	I1105 19:12:38.946548   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.946556   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:38.946561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:38.946613   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:38.979628   74485 cri.go:89] found id: ""
	I1105 19:12:38.979655   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.979663   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:38.979672   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:38.979682   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:39.056066   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:39.056102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.092303   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:39.092333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:39.143754   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:39.143790   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:39.156553   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:39.156587   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:39.220882   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:41.721766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:41.734823   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:41.734893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:41.768636   74485 cri.go:89] found id: ""
	I1105 19:12:41.768668   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.768685   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:41.768693   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:41.768750   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:41.809506   74485 cri.go:89] found id: ""
	I1105 19:12:41.809533   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.809541   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:41.809546   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:41.809606   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:41.849953   74485 cri.go:89] found id: ""
	I1105 19:12:41.849977   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.849985   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:41.849991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:41.850037   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:41.893042   74485 cri.go:89] found id: ""
	I1105 19:12:41.893072   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.893084   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:41.893091   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:41.893152   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:41.936259   74485 cri.go:89] found id: ""
	I1105 19:12:41.936282   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.936292   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:41.936298   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:41.936347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:41.970322   74485 cri.go:89] found id: ""
	I1105 19:12:41.970344   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.970353   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:41.970360   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:41.970427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:42.004351   74485 cri.go:89] found id: ""
	I1105 19:12:42.004375   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.004383   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:42.004388   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:42.004443   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:42.035136   74485 cri.go:89] found id: ""
	I1105 19:12:42.035163   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.035174   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:42.035185   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:42.035201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:42.086760   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:42.086801   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:42.100795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:42.100829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:42.167480   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:42.167509   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:42.167529   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:42.248625   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:42.248664   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:44.785100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:44.798182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:44.798248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:44.834080   74485 cri.go:89] found id: ""
	I1105 19:12:44.834107   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.834115   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:44.834120   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:44.834179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:44.870572   74485 cri.go:89] found id: ""
	I1105 19:12:44.870602   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.870613   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:44.870620   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:44.870691   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:44.908960   74485 cri.go:89] found id: ""
	I1105 19:12:44.908991   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.909002   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:44.909010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:44.909075   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:44.945310   74485 cri.go:89] found id: ""
	I1105 19:12:44.945342   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.945350   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:44.945355   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:44.945409   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:44.982893   74485 cri.go:89] found id: ""
	I1105 19:12:44.982935   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.982946   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:44.982953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:44.983030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:45.015529   74485 cri.go:89] found id: ""
	I1105 19:12:45.015559   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.015571   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:45.015578   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:45.015640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:45.047252   74485 cri.go:89] found id: ""
	I1105 19:12:45.047284   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.047295   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:45.047302   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:45.047364   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:45.082963   74485 cri.go:89] found id: ""
	I1105 19:12:45.083009   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.083018   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:45.083026   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:45.083039   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:45.131844   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:45.131881   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:45.145500   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:45.145530   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:45.214668   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:45.214709   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:45.214725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:45.291203   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:45.291243   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:47.831908   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:47.844873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:47.844957   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:47.881587   74485 cri.go:89] found id: ""
	I1105 19:12:47.881617   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.881628   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:47.881644   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:47.881714   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:47.918381   74485 cri.go:89] found id: ""
	I1105 19:12:47.918411   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.918423   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:47.918430   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:47.918491   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:47.950835   74485 cri.go:89] found id: ""
	I1105 19:12:47.950864   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.950880   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:47.950889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:47.950947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:47.985234   74485 cri.go:89] found id: ""
	I1105 19:12:47.985261   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.985272   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:47.985279   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:47.985338   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:48.019406   74485 cri.go:89] found id: ""
	I1105 19:12:48.019437   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.019448   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:48.019455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:48.019532   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:48.053126   74485 cri.go:89] found id: ""
	I1105 19:12:48.053160   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.053172   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:48.053180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:48.053241   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:48.086847   74485 cri.go:89] found id: ""
	I1105 19:12:48.086872   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.086879   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:48.086885   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:48.086944   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:48.122366   74485 cri.go:89] found id: ""
	I1105 19:12:48.122388   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.122396   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:48.122404   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:48.122421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:48.171579   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:48.171622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:48.185207   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:48.185234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:48.249553   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:48.249575   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:48.249586   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:48.323391   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:48.323427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:50.861939   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:50.874943   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:50.875041   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:50.911498   74485 cri.go:89] found id: ""
	I1105 19:12:50.911522   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.911530   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:50.911536   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:50.911591   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:50.946936   74485 cri.go:89] found id: ""
	I1105 19:12:50.946962   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.946988   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:50.947034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:50.947098   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:50.983220   74485 cri.go:89] found id: ""
	I1105 19:12:50.983246   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.983258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:50.983265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:50.983314   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:51.017052   74485 cri.go:89] found id: ""
	I1105 19:12:51.017078   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.017086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:51.017092   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:51.017141   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:51.051417   74485 cri.go:89] found id: ""
	I1105 19:12:51.051448   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.051459   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:51.051466   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:51.051529   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:51.085129   74485 cri.go:89] found id: ""
	I1105 19:12:51.085164   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.085177   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:51.085182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:51.085232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:51.122065   74485 cri.go:89] found id: ""
	I1105 19:12:51.122100   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.122113   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:51.122120   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:51.122178   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:51.154909   74485 cri.go:89] found id: ""
	I1105 19:12:51.154938   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.154946   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:51.154954   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:51.154966   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:51.167768   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:51.167798   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:51.231849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:51.231873   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:51.231897   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:51.314426   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:51.314487   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:51.356654   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:51.356685   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:53.911774   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:53.924884   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:53.924968   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:53.957690   74485 cri.go:89] found id: ""
	I1105 19:12:53.957719   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.957729   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:53.957737   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:53.957802   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:53.990717   74485 cri.go:89] found id: ""
	I1105 19:12:53.990744   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.990751   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:53.990757   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:53.990803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:54.023229   74485 cri.go:89] found id: ""
	I1105 19:12:54.023251   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.023258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:54.023263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:54.023320   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:54.056950   74485 cri.go:89] found id: ""
	I1105 19:12:54.056977   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.056987   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:54.056995   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:54.057056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:54.091729   74485 cri.go:89] found id: ""
	I1105 19:12:54.091756   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.091768   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:54.091776   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:54.091828   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:54.123964   74485 cri.go:89] found id: ""
	I1105 19:12:54.123991   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.124001   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:54.124009   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:54.124070   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:54.155164   74485 cri.go:89] found id: ""
	I1105 19:12:54.155195   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.155204   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:54.155209   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:54.155268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:54.188161   74485 cri.go:89] found id: ""
	I1105 19:12:54.188191   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.188202   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:54.188213   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:54.188226   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:54.240906   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:54.240941   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:54.254061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:54.254093   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:54.321973   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:54.322007   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:54.322026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:54.405106   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:54.405147   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:56.941801   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:56.954658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:56.954741   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:56.990372   74485 cri.go:89] found id: ""
	I1105 19:12:56.990400   74485 logs.go:282] 0 containers: []
	W1105 19:12:56.990411   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:56.990419   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:56.990479   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:57.023047   74485 cri.go:89] found id: ""
	I1105 19:12:57.023082   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.023093   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:57.023102   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:57.023163   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:57.054991   74485 cri.go:89] found id: ""
	I1105 19:12:57.055021   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.055030   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:57.055036   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:57.055094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:57.086182   74485 cri.go:89] found id: ""
	I1105 19:12:57.086214   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.086225   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:57.086233   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:57.086295   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:57.120322   74485 cri.go:89] found id: ""
	I1105 19:12:57.120350   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.120361   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:57.120368   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:57.120431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:57.153751   74485 cri.go:89] found id: ""
	I1105 19:12:57.153781   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.153790   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:57.153796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:57.153845   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:57.189208   74485 cri.go:89] found id: ""
	I1105 19:12:57.189234   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.189244   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:57.189251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:57.189317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:57.223259   74485 cri.go:89] found id: ""
	I1105 19:12:57.223292   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.223301   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:57.223308   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:57.223320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:57.273063   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:57.273098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:57.287759   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:57.287783   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:57.353387   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:57.353409   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:57.353421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:57.426374   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:57.426411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:59.965907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:59.979081   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:59.979149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:00.010955   74485 cri.go:89] found id: ""
	I1105 19:13:00.011001   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.011012   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:00.011021   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:00.011081   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:00.044800   74485 cri.go:89] found id: ""
	I1105 19:13:00.044825   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.044832   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:00.044838   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:00.044894   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:00.082999   74485 cri.go:89] found id: ""
	I1105 19:13:00.083040   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.083050   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:00.083059   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:00.083125   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:00.120792   74485 cri.go:89] found id: ""
	I1105 19:13:00.120826   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.120835   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:00.120840   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:00.120903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:00.153156   74485 cri.go:89] found id: ""
	I1105 19:13:00.153188   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.153200   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:00.153207   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:00.153273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:00.189039   74485 cri.go:89] found id: ""
	I1105 19:13:00.189066   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.189073   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:00.189079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:00.189143   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:00.220904   74485 cri.go:89] found id: ""
	I1105 19:13:00.220932   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.220942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:00.220950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:00.221012   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:00.255414   74485 cri.go:89] found id: ""
	I1105 19:13:00.255443   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.255454   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:00.255464   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:00.255480   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:00.329027   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:00.329050   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:00.329061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:00.405813   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:00.405847   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:00.443302   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:00.443332   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:00.498413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:00.498452   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:03.011897   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:03.025351   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:03.025419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:03.058881   74485 cri.go:89] found id: ""
	I1105 19:13:03.058910   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.058920   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:03.058928   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:03.059018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:03.093549   74485 cri.go:89] found id: ""
	I1105 19:13:03.093580   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.093592   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:03.093600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:03.093660   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:03.132355   74485 cri.go:89] found id: ""
	I1105 19:13:03.132384   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.132395   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:03.132402   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:03.132463   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:03.164832   74485 cri.go:89] found id: ""
	I1105 19:13:03.164864   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.164875   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:03.164888   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:03.164947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:03.203187   74485 cri.go:89] found id: ""
	I1105 19:13:03.203213   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.203221   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:03.203226   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:03.203282   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:03.238867   74485 cri.go:89] found id: ""
	I1105 19:13:03.238899   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.238921   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:03.238928   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:03.239010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:03.276139   74485 cri.go:89] found id: ""
	I1105 19:13:03.276174   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.276187   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:03.276195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:03.276251   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:03.312588   74485 cri.go:89] found id: ""
	I1105 19:13:03.312613   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.312631   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:03.312639   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:03.312650   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:03.379754   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:03.379782   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:03.379797   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:03.455719   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:03.455754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.493428   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:03.493458   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:03.545447   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:03.545481   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.060213   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:06.074756   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:06.074831   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:06.111392   74485 cri.go:89] found id: ""
	I1105 19:13:06.111421   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.111429   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:06.111435   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:06.111493   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:06.147535   74485 cri.go:89] found id: ""
	I1105 19:13:06.147568   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.147579   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:06.147585   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:06.147646   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:06.183176   74485 cri.go:89] found id: ""
	I1105 19:13:06.183198   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.183205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:06.183211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:06.183262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:06.213957   74485 cri.go:89] found id: ""
	I1105 19:13:06.213983   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.213992   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:06.213997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:06.214060   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:06.251199   74485 cri.go:89] found id: ""
	I1105 19:13:06.251227   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.251234   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:06.251240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:06.251297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:06.288128   74485 cri.go:89] found id: ""
	I1105 19:13:06.288157   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.288167   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:06.288174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:06.288236   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:06.325265   74485 cri.go:89] found id: ""
	I1105 19:13:06.325296   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.325306   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:06.325314   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:06.325375   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:06.359649   74485 cri.go:89] found id: ""
	I1105 19:13:06.359689   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.359700   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:06.359710   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:06.359725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:06.408423   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:06.408456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.421776   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:06.421804   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:06.487464   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:06.487493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:06.487507   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:06.565789   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:06.565829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:09.104578   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:09.117930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:09.118022   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:09.156055   74485 cri.go:89] found id: ""
	I1105 19:13:09.156083   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.156093   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:09.156101   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:09.156161   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:09.190470   74485 cri.go:89] found id: ""
	I1105 19:13:09.190499   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.190509   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:09.190516   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:09.190576   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:09.222568   74485 cri.go:89] found id: ""
	I1105 19:13:09.222595   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.222606   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:09.222612   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:09.222677   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:09.260251   74485 cri.go:89] found id: ""
	I1105 19:13:09.260282   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.260292   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:09.260300   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:09.260362   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:09.296006   74485 cri.go:89] found id: ""
	I1105 19:13:09.296036   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.296047   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:09.296054   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:09.296118   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:09.331213   74485 cri.go:89] found id: ""
	I1105 19:13:09.331246   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.331257   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:09.331265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:09.331333   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:09.364286   74485 cri.go:89] found id: ""
	I1105 19:13:09.364316   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.364327   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:09.364335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:09.364445   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:09.398060   74485 cri.go:89] found id: ""
	I1105 19:13:09.398084   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.398092   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:09.398101   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:09.398113   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:09.447373   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:09.447409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:09.461483   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:09.461514   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:09.528213   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:09.528236   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:09.528248   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:09.607397   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:09.607430   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.146158   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:12.159183   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:12.159262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:12.193917   74485 cri.go:89] found id: ""
	I1105 19:13:12.193952   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.193963   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:12.193971   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:12.194036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:12.226558   74485 cri.go:89] found id: ""
	I1105 19:13:12.226585   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.226594   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:12.226600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:12.226662   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:12.258437   74485 cri.go:89] found id: ""
	I1105 19:13:12.258469   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.258481   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:12.258488   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:12.258557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:12.291308   74485 cri.go:89] found id: ""
	I1105 19:13:12.291341   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.291353   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:12.291361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:12.291431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:12.325768   74485 cri.go:89] found id: ""
	I1105 19:13:12.325801   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.325812   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:12.325819   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:12.325884   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:12.361077   74485 cri.go:89] found id: ""
	I1105 19:13:12.361100   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.361108   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:12.361118   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:12.361179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:12.394769   74485 cri.go:89] found id: ""
	I1105 19:13:12.394791   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.394800   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:12.394806   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:12.394864   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:12.430138   74485 cri.go:89] found id: ""
	I1105 19:13:12.430167   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.430177   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:12.430189   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:12.430200   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.472596   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:12.472637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:12.523107   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:12.523143   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:12.535797   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:12.535824   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:12.604088   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:12.604108   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:12.604123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:15.185725   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:15.200158   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:15.200238   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:15.238309   74485 cri.go:89] found id: ""
	I1105 19:13:15.238334   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.238342   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:15.238349   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:15.238404   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:15.272897   74485 cri.go:89] found id: ""
	I1105 19:13:15.272927   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.272938   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:15.272945   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:15.273013   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:15.307700   74485 cri.go:89] found id: ""
	I1105 19:13:15.307726   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.307737   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:15.307744   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:15.307810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:15.340156   74485 cri.go:89] found id: ""
	I1105 19:13:15.340182   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.340196   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:15.340202   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:15.340252   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:15.375930   74485 cri.go:89] found id: ""
	I1105 19:13:15.375963   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.375971   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:15.375976   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:15.376031   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:15.409876   74485 cri.go:89] found id: ""
	I1105 19:13:15.409905   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.409915   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:15.409922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:15.409984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:15.442781   74485 cri.go:89] found id: ""
	I1105 19:13:15.442808   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.442819   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:15.442825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:15.442896   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:15.480578   74485 cri.go:89] found id: ""
	I1105 19:13:15.480606   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.480614   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:15.480623   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:15.480634   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:15.530910   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:15.530952   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:15.544351   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:15.544382   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:15.618345   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:15.618373   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:15.618396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:15.704408   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:15.704451   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:18.244882   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:18.258667   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:18.258758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:18.292140   74485 cri.go:89] found id: ""
	I1105 19:13:18.292163   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.292171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:18.292178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:18.292235   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:18.324954   74485 cri.go:89] found id: ""
	I1105 19:13:18.324979   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.324985   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:18.324991   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:18.325048   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:18.361943   74485 cri.go:89] found id: ""
	I1105 19:13:18.361972   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.361983   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:18.361991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:18.362062   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:18.396012   74485 cri.go:89] found id: ""
	I1105 19:13:18.396036   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.396044   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:18.396050   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:18.396097   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:18.428852   74485 cri.go:89] found id: ""
	I1105 19:13:18.428875   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.428883   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:18.428889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:18.428946   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:18.464364   74485 cri.go:89] found id: ""
	I1105 19:13:18.464390   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.464397   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:18.464404   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:18.464464   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:18.496478   74485 cri.go:89] found id: ""
	I1105 19:13:18.496505   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.496514   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:18.496519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:18.496577   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:18.530313   74485 cri.go:89] found id: ""
	I1105 19:13:18.530339   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.530348   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:18.530356   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:18.530368   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:18.582593   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:18.582627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:18.596580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:18.596616   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:18.663920   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:18.663959   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:18.663974   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:18.740706   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:18.740746   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.281614   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:21.295841   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:21.295919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:21.330832   74485 cri.go:89] found id: ""
	I1105 19:13:21.330856   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.330864   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:21.330869   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:21.330922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:21.365228   74485 cri.go:89] found id: ""
	I1105 19:13:21.365257   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.365265   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:21.365269   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:21.365317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:21.418675   74485 cri.go:89] found id: ""
	I1105 19:13:21.418702   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.418719   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:21.418727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:21.418793   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:21.453966   74485 cri.go:89] found id: ""
	I1105 19:13:21.453994   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.454003   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:21.454008   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:21.454058   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:21.492030   74485 cri.go:89] found id: ""
	I1105 19:13:21.492056   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.492067   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:21.492078   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:21.492128   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:21.529146   74485 cri.go:89] found id: ""
	I1105 19:13:21.529174   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.529183   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:21.529190   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:21.529250   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:21.566491   74485 cri.go:89] found id: ""
	I1105 19:13:21.566519   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.566528   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:21.566533   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:21.566595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:21.605720   74485 cri.go:89] found id: ""
	I1105 19:13:21.605745   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.605754   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:21.605762   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:21.605772   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:21.682385   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:21.682408   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:21.682420   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:21.764519   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:21.764557   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.805090   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:21.805117   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:21.857560   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:21.857593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:24.371420   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:24.384566   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:24.384634   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:24.416283   74485 cri.go:89] found id: ""
	I1105 19:13:24.416308   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.416319   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:24.416327   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:24.416388   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:24.452875   74485 cri.go:89] found id: ""
	I1105 19:13:24.452899   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.452907   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:24.452913   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:24.452964   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:24.489946   74485 cri.go:89] found id: ""
	I1105 19:13:24.489974   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.489992   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:24.490000   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:24.490056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:24.527348   74485 cri.go:89] found id: ""
	I1105 19:13:24.527377   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.527388   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:24.527395   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:24.527451   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:24.558992   74485 cri.go:89] found id: ""
	I1105 19:13:24.559024   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.559035   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:24.559047   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:24.559105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:24.591405   74485 cri.go:89] found id: ""
	I1105 19:13:24.591437   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.591448   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:24.591455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:24.591516   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.625002   74485 cri.go:89] found id: ""
	I1105 19:13:24.625031   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.625040   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:24.625048   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:24.625114   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:24.657867   74485 cri.go:89] found id: ""
	I1105 19:13:24.657896   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.657907   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:24.657918   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:24.657931   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:24.708444   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:24.708482   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:24.721771   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:24.721814   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:24.793946   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:24.793980   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:24.793996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:24.875130   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:24.875167   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:27.412872   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:27.426996   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:27.427072   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:27.462434   74485 cri.go:89] found id: ""
	I1105 19:13:27.462458   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.462468   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:27.462475   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:27.462536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:27.496916   74485 cri.go:89] found id: ""
	I1105 19:13:27.496951   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.496962   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:27.496969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:27.497035   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:27.528826   74485 cri.go:89] found id: ""
	I1105 19:13:27.528853   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.528861   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:27.528867   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:27.528919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:27.563164   74485 cri.go:89] found id: ""
	I1105 19:13:27.563193   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.563204   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:27.563210   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:27.563284   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:27.600136   74485 cri.go:89] found id: ""
	I1105 19:13:27.600164   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.600174   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:27.600180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:27.600247   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:27.634326   74485 cri.go:89] found id: ""
	I1105 19:13:27.634358   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.634368   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:27.634377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:27.634452   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:27.668154   74485 cri.go:89] found id: ""
	I1105 19:13:27.668185   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.668196   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:27.668203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:27.668263   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:27.706016   74485 cri.go:89] found id: ""
	I1105 19:13:27.706043   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.706051   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:27.706059   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:27.706071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:27.755890   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:27.755929   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:27.773038   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:27.773063   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:27.863392   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:27.863414   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:27.863429   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:27.949149   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:27.949185   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.489333   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:30.502794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:30.502878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:30.536263   74485 cri.go:89] found id: ""
	I1105 19:13:30.536289   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.536297   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:30.536302   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:30.536347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:30.570418   74485 cri.go:89] found id: ""
	I1105 19:13:30.570445   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.570455   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:30.570462   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:30.570523   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:30.601972   74485 cri.go:89] found id: ""
	I1105 19:13:30.602003   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.602013   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:30.602020   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:30.602086   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:30.634151   74485 cri.go:89] found id: ""
	I1105 19:13:30.634183   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.634195   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:30.634203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:30.634265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:30.666384   74485 cri.go:89] found id: ""
	I1105 19:13:30.666415   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.666425   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:30.666433   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:30.666498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:30.699587   74485 cri.go:89] found id: ""
	I1105 19:13:30.699619   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.699631   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:30.699639   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:30.699699   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:30.731917   74485 cri.go:89] found id: ""
	I1105 19:13:30.731972   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.731983   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:30.731990   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:30.732051   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:30.768807   74485 cri.go:89] found id: ""
	I1105 19:13:30.768832   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.768840   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:30.768849   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:30.768860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:30.848594   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:30.848626   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.889031   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:30.889067   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:30.940550   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:30.940588   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:30.953810   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:30.953845   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:31.023633   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:33.524150   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:33.539025   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:33.539112   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:33.584756   74485 cri.go:89] found id: ""
	I1105 19:13:33.584786   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.584799   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:33.584807   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:33.584869   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:33.624785   74485 cri.go:89] found id: ""
	I1105 19:13:33.624816   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.624829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:33.624836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:33.625025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:33.668750   74485 cri.go:89] found id: ""
	I1105 19:13:33.668783   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.668794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:33.668804   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:33.668867   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:33.701675   74485 cri.go:89] found id: ""
	I1105 19:13:33.701707   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.701735   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:33.701743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:33.701817   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:33.737368   74485 cri.go:89] found id: ""
	I1105 19:13:33.737393   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.737401   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:33.737407   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:33.737458   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:33.770589   74485 cri.go:89] found id: ""
	I1105 19:13:33.770620   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.770630   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:33.770638   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:33.770704   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:33.802635   74485 cri.go:89] found id: ""
	I1105 19:13:33.802668   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.802680   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:33.802687   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:33.802751   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:33.839274   74485 cri.go:89] found id: ""
	I1105 19:13:33.839301   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.839309   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:33.839317   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:33.839328   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:33.881049   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:33.881090   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:33.932704   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:33.932743   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:33.945979   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:33.946007   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:34.017355   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:34.017375   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:34.017390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:36.596284   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:36.608240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:36.608306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:36.641846   74485 cri.go:89] found id: ""
	I1105 19:13:36.641878   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.641887   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:36.641901   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:36.641966   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:36.676553   74485 cri.go:89] found id: ""
	I1105 19:13:36.676584   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.676595   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:36.676602   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:36.676669   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:36.711931   74485 cri.go:89] found id: ""
	I1105 19:13:36.711961   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.711972   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:36.711980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:36.712042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:36.748510   74485 cri.go:89] found id: ""
	I1105 19:13:36.748534   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.748542   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:36.748547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:36.748596   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:36.781869   74485 cri.go:89] found id: ""
	I1105 19:13:36.781899   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.781912   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:36.781922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:36.781983   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:36.816574   74485 cri.go:89] found id: ""
	I1105 19:13:36.816597   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.816605   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:36.816610   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:36.816658   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:36.852894   74485 cri.go:89] found id: ""
	I1105 19:13:36.852921   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.852928   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:36.852934   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:36.852996   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:36.891732   74485 cri.go:89] found id: ""
	I1105 19:13:36.891764   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.891783   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:36.891795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:36.891810   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:36.964948   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:36.964972   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:36.964987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:37.043727   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:37.043765   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:37.084306   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:37.084333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:37.133238   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:37.133274   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:39.647492   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:39.659944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:39.660025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:39.695382   74485 cri.go:89] found id: ""
	I1105 19:13:39.695405   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.695415   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:39.695422   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:39.695480   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:39.731807   74485 cri.go:89] found id: ""
	I1105 19:13:39.731833   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.731841   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:39.731846   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:39.731895   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:39.766913   74485 cri.go:89] found id: ""
	I1105 19:13:39.766945   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.766955   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:39.766963   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:39.767049   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:39.800265   74485 cri.go:89] found id: ""
	I1105 19:13:39.800288   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.800296   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:39.800301   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:39.800346   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:39.832753   74485 cri.go:89] found id: ""
	I1105 19:13:39.832781   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.832789   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:39.832794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:39.832843   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:39.865950   74485 cri.go:89] found id: ""
	I1105 19:13:39.865980   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.865990   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:39.865997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:39.866046   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:39.902918   74485 cri.go:89] found id: ""
	I1105 19:13:39.902948   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.902957   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:39.902962   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:39.903039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:39.935086   74485 cri.go:89] found id: ""
	I1105 19:13:39.935117   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.935129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:39.935139   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:39.935152   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:39.997935   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:39.997961   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:39.997976   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:40.076794   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:40.076852   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:40.114178   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:40.114209   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:40.163512   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:40.163550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:42.676843   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:42.689855   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:42.689930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:42.724108   74485 cri.go:89] found id: ""
	I1105 19:13:42.724139   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.724148   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:42.724156   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:42.724218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:42.760816   74485 cri.go:89] found id: ""
	I1105 19:13:42.760844   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.760854   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:42.760861   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:42.760924   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:42.795111   74485 cri.go:89] found id: ""
	I1105 19:13:42.795134   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.795142   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:42.795147   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:42.795195   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:42.832964   74485 cri.go:89] found id: ""
	I1105 19:13:42.832988   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.832997   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:42.833003   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:42.833065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:42.868817   74485 cri.go:89] found id: ""
	I1105 19:13:42.868848   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.868858   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:42.868865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:42.868933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:42.902015   74485 cri.go:89] found id: ""
	I1105 19:13:42.902044   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.902051   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:42.902056   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:42.902146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:42.934298   74485 cri.go:89] found id: ""
	I1105 19:13:42.934322   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.934330   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:42.934335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:42.934385   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:42.969804   74485 cri.go:89] found id: ""
	I1105 19:13:42.969831   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.969843   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:42.969854   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:42.969873   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:43.019922   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:43.019959   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:43.033594   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:43.033622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:43.108220   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:43.108240   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:43.108251   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:43.191946   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:43.191987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:45.730728   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:45.743344   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:45.743419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:45.777693   74485 cri.go:89] found id: ""
	I1105 19:13:45.777728   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.777739   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:45.777747   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:45.777810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:45.810195   74485 cri.go:89] found id: ""
	I1105 19:13:45.810222   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.810233   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:45.810240   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:45.810308   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:45.851210   74485 cri.go:89] found id: ""
	I1105 19:13:45.851240   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.851247   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:45.851252   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:45.851311   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:45.885501   74485 cri.go:89] found id: ""
	I1105 19:13:45.885531   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.885540   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:45.885546   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:45.885595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:45.921638   74485 cri.go:89] found id: ""
	I1105 19:13:45.921667   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.921676   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:45.921684   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:45.921745   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:45.954341   74485 cri.go:89] found id: ""
	I1105 19:13:45.954373   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.954384   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:45.954394   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:45.954461   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:45.988840   74485 cri.go:89] found id: ""
	I1105 19:13:45.988865   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.988873   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:45.988879   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:45.988949   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:46.025409   74485 cri.go:89] found id: ""
	I1105 19:13:46.025441   74485 logs.go:282] 0 containers: []
	W1105 19:13:46.025458   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:46.025470   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:46.025486   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:46.037763   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:46.037787   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:46.112619   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:46.112663   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:46.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:46.192165   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:46.192199   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:46.233235   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:46.233263   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:48.787685   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:48.800681   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:48.800749   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:48.835344   74485 cri.go:89] found id: ""
	I1105 19:13:48.835366   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.835374   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:48.835383   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:48.835429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:48.867447   74485 cri.go:89] found id: ""
	I1105 19:13:48.867474   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.867483   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:48.867488   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:48.867536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:48.899135   74485 cri.go:89] found id: ""
	I1105 19:13:48.899160   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.899167   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:48.899172   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:48.899221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:48.932208   74485 cri.go:89] found id: ""
	I1105 19:13:48.932243   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.932255   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:48.932263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:48.932326   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:48.967174   74485 cri.go:89] found id: ""
	I1105 19:13:48.967202   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.967210   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:48.967215   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:48.967267   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:48.998902   74485 cri.go:89] found id: ""
	I1105 19:13:48.998932   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.998942   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:48.998950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:48.999030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:49.030946   74485 cri.go:89] found id: ""
	I1105 19:13:49.030988   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.030999   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:49.031006   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:49.031074   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:49.063489   74485 cri.go:89] found id: ""
	I1105 19:13:49.063517   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.063528   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:49.063540   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:49.063555   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:49.116433   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:49.116477   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:49.131439   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:49.131476   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:49.199770   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:49.199795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:49.199809   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:49.275503   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:49.275543   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:51.816208   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:51.829328   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:51.829399   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:51.863320   74485 cri.go:89] found id: ""
	I1105 19:13:51.863346   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.863354   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:51.863359   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:51.863406   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:51.896589   74485 cri.go:89] found id: ""
	I1105 19:13:51.896618   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.896628   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:51.896635   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:51.896697   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:51.933744   74485 cri.go:89] found id: ""
	I1105 19:13:51.933769   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.933776   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:51.933781   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:51.933829   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:51.970806   74485 cri.go:89] found id: ""
	I1105 19:13:51.970829   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.970836   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:51.970842   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:51.970889   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:52.004087   74485 cri.go:89] found id: ""
	I1105 19:13:52.004116   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.004124   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:52.004129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:52.004186   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:52.041721   74485 cri.go:89] found id: ""
	I1105 19:13:52.041752   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.041763   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:52.041771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:52.041835   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:52.079253   74485 cri.go:89] found id: ""
	I1105 19:13:52.079277   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.079285   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:52.079292   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:52.079351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:52.112604   74485 cri.go:89] found id: ""
	I1105 19:13:52.112642   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.112653   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:52.112664   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:52.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:52.160799   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:52.160841   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:52.174323   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:52.174355   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:52.247358   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:52.247383   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:52.247395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:52.326071   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:52.326108   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:54.866454   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:54.879015   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:54.879093   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:54.911729   74485 cri.go:89] found id: ""
	I1105 19:13:54.911765   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.911777   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:54.911785   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:54.911846   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:54.943137   74485 cri.go:89] found id: ""
	I1105 19:13:54.943169   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.943185   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:54.943193   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:54.943253   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:54.977951   74485 cri.go:89] found id: ""
	I1105 19:13:54.977980   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.977991   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:54.977998   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:54.978061   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:55.009453   74485 cri.go:89] found id: ""
	I1105 19:13:55.009478   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.009486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:55.009491   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:55.009537   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:55.040790   74485 cri.go:89] found id: ""
	I1105 19:13:55.040814   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.040821   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:55.040827   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:55.040878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:55.073401   74485 cri.go:89] found id: ""
	I1105 19:13:55.073430   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.073441   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:55.073449   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:55.073508   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:55.105419   74485 cri.go:89] found id: ""
	I1105 19:13:55.105443   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.105451   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:55.105456   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:55.105511   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:55.137363   74485 cri.go:89] found id: ""
	I1105 19:13:55.137395   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.137406   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:55.137416   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:55.137431   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:55.174176   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:55.174201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:55.221658   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:55.221693   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:55.235044   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:55.235070   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:55.308192   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:55.308218   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:55.308234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:57.892462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:57.905472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:57.905543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:57.946044   74485 cri.go:89] found id: ""
	I1105 19:13:57.946071   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.946081   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:57.946089   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:57.946149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:57.980762   74485 cri.go:89] found id: ""
	I1105 19:13:57.980791   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.980803   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:57.980811   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:57.980874   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:58.013351   74485 cri.go:89] found id: ""
	I1105 19:13:58.013374   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.013381   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:58.013386   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:58.013433   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:58.049056   74485 cri.go:89] found id: ""
	I1105 19:13:58.049083   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.049091   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:58.049097   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:58.049147   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:58.081476   74485 cri.go:89] found id: ""
	I1105 19:13:58.081507   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.081517   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:58.081524   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:58.081583   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:58.114526   74485 cri.go:89] found id: ""
	I1105 19:13:58.114554   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.114564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:58.114571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:58.114630   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:58.148219   74485 cri.go:89] found id: ""
	I1105 19:13:58.148243   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.148252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:58.148257   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:58.148312   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:58.183254   74485 cri.go:89] found id: ""
	I1105 19:13:58.183277   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.183285   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:58.183292   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:58.183304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:58.234747   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:58.234785   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:58.248269   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:58.248300   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:58.313290   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:58.313312   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:58.313327   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:58.389847   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:58.389889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:00.927957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:00.941525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:00.941593   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:00.974891   74485 cri.go:89] found id: ""
	I1105 19:14:00.974920   74485 logs.go:282] 0 containers: []
	W1105 19:14:00.974931   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:00.974938   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:00.975018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:01.008224   74485 cri.go:89] found id: ""
	I1105 19:14:01.008250   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.008262   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:01.008270   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:01.008328   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:01.044514   74485 cri.go:89] found id: ""
	I1105 19:14:01.044545   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.044553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:01.044559   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:01.044614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:01.077091   74485 cri.go:89] found id: ""
	I1105 19:14:01.077124   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.077135   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:01.077141   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:01.077197   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:01.109947   74485 cri.go:89] found id: ""
	I1105 19:14:01.109976   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.109986   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:01.109994   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:01.110054   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:01.146162   74485 cri.go:89] found id: ""
	I1105 19:14:01.146193   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.146203   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:01.146211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:01.146275   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:01.180335   74485 cri.go:89] found id: ""
	I1105 19:14:01.180360   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.180370   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:01.180377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:01.180436   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:01.216160   74485 cri.go:89] found id: ""
	I1105 19:14:01.216189   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.216199   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:01.216221   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:01.216236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:01.229426   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:01.229455   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:01.298847   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:01.298874   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:01.298889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:01.375255   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:01.375299   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:01.417946   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:01.418026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:03.973713   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:03.987128   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:03.987198   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:04.020050   74485 cri.go:89] found id: ""
	I1105 19:14:04.020081   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.020091   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:04.020098   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:04.020164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:04.053458   74485 cri.go:89] found id: ""
	I1105 19:14:04.053485   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.053492   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:04.053498   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:04.053544   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:04.086417   74485 cri.go:89] found id: ""
	I1105 19:14:04.086442   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.086455   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:04.086461   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:04.086513   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:04.122035   74485 cri.go:89] found id: ""
	I1105 19:14:04.122059   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.122067   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:04.122073   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:04.122120   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:04.158732   74485 cri.go:89] found id: ""
	I1105 19:14:04.158758   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.158765   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:04.158771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:04.158822   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:04.190497   74485 cri.go:89] found id: ""
	I1105 19:14:04.190525   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.190536   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:04.190543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:04.190604   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:04.222040   74485 cri.go:89] found id: ""
	I1105 19:14:04.222066   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.222074   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:04.222079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:04.222131   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:04.258753   74485 cri.go:89] found id: ""
	I1105 19:14:04.258781   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.258793   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:04.258804   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:04.258819   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:04.299966   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:04.300052   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:04.355364   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:04.355395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:04.368954   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:04.368980   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:04.431658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:04.431688   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:04.431700   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.015289   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:07.029580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:07.029644   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:07.066931   74485 cri.go:89] found id: ""
	I1105 19:14:07.066964   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.066993   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:07.067004   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:07.067059   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:07.104315   74485 cri.go:89] found id: ""
	I1105 19:14:07.104341   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.104349   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:07.104354   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:07.104401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:07.141271   74485 cri.go:89] found id: ""
	I1105 19:14:07.141298   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.141305   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:07.141311   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:07.141360   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:07.174600   74485 cri.go:89] found id: ""
	I1105 19:14:07.174631   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.174643   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:07.174653   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:07.174707   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:07.211920   74485 cri.go:89] found id: ""
	I1105 19:14:07.211958   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.211969   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:07.211975   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:07.212027   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:07.248238   74485 cri.go:89] found id: ""
	I1105 19:14:07.248269   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.248280   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:07.248286   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:07.248344   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:07.279833   74485 cri.go:89] found id: ""
	I1105 19:14:07.279864   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.279874   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:07.279881   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:07.279931   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:07.317411   74485 cri.go:89] found id: ""
	I1105 19:14:07.317441   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.317452   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:07.317461   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:07.317474   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:07.390499   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:07.390535   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:07.390556   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.488858   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:07.488895   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:07.528612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:07.528645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:07.581884   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:07.581927   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:10.096089   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:10.110828   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:10.110898   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:10.147299   74485 cri.go:89] found id: ""
	I1105 19:14:10.147332   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.147344   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:10.147350   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:10.147401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:10.181457   74485 cri.go:89] found id: ""
	I1105 19:14:10.181482   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.181489   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:10.181495   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:10.181540   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:10.215210   74485 cri.go:89] found id: ""
	I1105 19:14:10.215241   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.215252   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:10.215259   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:10.215319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:10.249587   74485 cri.go:89] found id: ""
	I1105 19:14:10.249609   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.249617   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:10.249625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:10.249679   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:10.282566   74485 cri.go:89] found id: ""
	I1105 19:14:10.282591   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.282598   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:10.282604   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:10.282672   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:10.314312   74485 cri.go:89] found id: ""
	I1105 19:14:10.314344   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.314355   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:10.314361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:10.314415   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:10.346988   74485 cri.go:89] found id: ""
	I1105 19:14:10.347016   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.347028   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:10.347035   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:10.347088   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:10.381326   74485 cri.go:89] found id: ""
	I1105 19:14:10.381354   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.381370   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:10.381380   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:10.381394   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:10.418311   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:10.418344   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:10.469559   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:10.469590   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:10.482394   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:10.482427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:10.551831   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:10.551854   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:10.551870   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:13.127576   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:13.143182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:13.143242   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:13.188794   74485 cri.go:89] found id: ""
	I1105 19:14:13.188827   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.188839   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:13.188846   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:13.188897   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:13.221790   74485 cri.go:89] found id: ""
	I1105 19:14:13.221818   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.221829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:13.221836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:13.221893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:13.255164   74485 cri.go:89] found id: ""
	I1105 19:14:13.255194   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.255205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:13.255212   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:13.255272   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:13.288203   74485 cri.go:89] found id: ""
	I1105 19:14:13.288231   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.288241   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:13.288249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:13.288307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:13.321438   74485 cri.go:89] found id: ""
	I1105 19:14:13.321463   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.321475   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:13.321482   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:13.321541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:13.361858   74485 cri.go:89] found id: ""
	I1105 19:14:13.361886   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.361897   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:13.361905   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:13.361979   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:13.394210   74485 cri.go:89] found id: ""
	I1105 19:14:13.394239   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.394252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:13.394260   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:13.394324   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:13.434665   74485 cri.go:89] found id: ""
	I1105 19:14:13.434697   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.434705   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:13.434712   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:13.434724   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:13.447849   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:13.447875   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:13.514353   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:13.514377   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:13.514390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:13.590746   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:13.590784   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:13.627704   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:13.627732   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:16.180171   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:16.193282   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:16.193342   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:16.230087   74485 cri.go:89] found id: ""
	I1105 19:14:16.230118   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.230128   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:16.230137   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:16.230200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:16.264315   74485 cri.go:89] found id: ""
	I1105 19:14:16.264348   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.264360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:16.264368   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:16.264429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:16.298197   74485 cri.go:89] found id: ""
	I1105 19:14:16.298231   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.298243   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:16.298251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:16.298316   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:16.333149   74485 cri.go:89] found id: ""
	I1105 19:14:16.333180   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.333193   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:16.333203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:16.333268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:16.366863   74485 cri.go:89] found id: ""
	I1105 19:14:16.366887   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.366895   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:16.366900   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:16.366947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:16.400434   74485 cri.go:89] found id: ""
	I1105 19:14:16.400458   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.400466   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:16.400472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:16.400524   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:16.435475   74485 cri.go:89] found id: ""
	I1105 19:14:16.435497   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.435504   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:16.435510   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:16.435560   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:16.470577   74485 cri.go:89] found id: ""
	I1105 19:14:16.470604   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.470612   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:16.470620   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:16.470632   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:16.483061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:16.483094   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:16.550662   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:16.550690   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:16.550702   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:16.629372   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:16.629411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:16.669488   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:16.669526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:19.219244   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:19.232682   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:19.232744   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:19.264594   74485 cri.go:89] found id: ""
	I1105 19:14:19.264624   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.264635   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:19.264649   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:19.264708   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:19.301434   74485 cri.go:89] found id: ""
	I1105 19:14:19.301468   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.301479   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:19.301487   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:19.301558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:19.333465   74485 cri.go:89] found id: ""
	I1105 19:14:19.333494   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.333502   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:19.333508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:19.333558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:19.365865   74485 cri.go:89] found id: ""
	I1105 19:14:19.365892   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.365900   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:19.365906   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:19.365958   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:19.406533   74485 cri.go:89] found id: ""
	I1105 19:14:19.406563   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.406575   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:19.406583   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:19.406639   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:19.439351   74485 cri.go:89] found id: ""
	I1105 19:14:19.439377   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.439386   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:19.439392   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:19.439438   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:19.475033   74485 cri.go:89] found id: ""
	I1105 19:14:19.475058   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.475065   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:19.475070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:19.475119   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:19.508638   74485 cri.go:89] found id: ""
	I1105 19:14:19.508662   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.508670   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:19.508678   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:19.508689   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:19.588268   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:19.588293   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:19.588304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:19.671382   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:19.671415   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:19.716497   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:19.716526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:19.769686   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:19.769722   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.283476   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:22.296393   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:22.296456   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:22.331226   74485 cri.go:89] found id: ""
	I1105 19:14:22.331247   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.331255   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:22.331261   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:22.331306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:22.363466   74485 cri.go:89] found id: ""
	I1105 19:14:22.363499   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.363510   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:22.363518   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:22.363586   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:22.397025   74485 cri.go:89] found id: ""
	I1105 19:14:22.397052   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.397061   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:22.397066   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:22.397116   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:22.429450   74485 cri.go:89] found id: ""
	I1105 19:14:22.429476   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.429486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:22.429493   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:22.429554   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:22.461615   74485 cri.go:89] found id: ""
	I1105 19:14:22.461643   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.461654   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:22.461660   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:22.461728   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:22.492470   74485 cri.go:89] found id: ""
	I1105 19:14:22.492502   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.492513   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:22.492521   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:22.492587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:22.525335   74485 cri.go:89] found id: ""
	I1105 19:14:22.525358   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.525366   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:22.525372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:22.525423   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:22.558854   74485 cri.go:89] found id: ""
	I1105 19:14:22.558881   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.558890   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:22.558901   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:22.558916   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:22.608638   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:22.608674   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.621769   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:22.621800   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:22.688971   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:22.688998   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:22.689012   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:22.770517   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:22.770558   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:25.315778   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:25.335372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:25.335444   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:25.383988   74485 cri.go:89] found id: ""
	I1105 19:14:25.384019   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.384029   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:25.384036   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:25.384096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:25.432070   74485 cri.go:89] found id: ""
	I1105 19:14:25.432103   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.432115   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:25.432122   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:25.432184   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:25.464859   74485 cri.go:89] found id: ""
	I1105 19:14:25.464891   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.464902   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:25.464909   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:25.464976   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:25.498684   74485 cri.go:89] found id: ""
	I1105 19:14:25.498712   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.498719   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:25.498724   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:25.498777   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:25.532998   74485 cri.go:89] found id: ""
	I1105 19:14:25.533023   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.533032   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:25.533039   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:25.533084   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:25.568101   74485 cri.go:89] found id: ""
	I1105 19:14:25.568130   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.568138   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:25.568144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:25.568208   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:25.600470   74485 cri.go:89] found id: ""
	I1105 19:14:25.600495   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.600503   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:25.600509   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:25.600564   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:25.631792   74485 cri.go:89] found id: ""
	I1105 19:14:25.631824   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.631834   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:25.631845   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:25.631860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:25.683820   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:25.683856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:25.698066   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:25.698095   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:25.764838   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:25.764869   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:25.764886   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:25.838791   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:25.838828   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:28.376183   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:28.389686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:28.389760   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:28.424180   74485 cri.go:89] found id: ""
	I1105 19:14:28.424209   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.424221   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:28.424229   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:28.424289   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:28.462742   74485 cri.go:89] found id: ""
	I1105 19:14:28.462765   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.462777   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:28.462784   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:28.462839   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:28.494550   74485 cri.go:89] found id: ""
	I1105 19:14:28.494574   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.494581   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:28.494588   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:28.494667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:28.525606   74485 cri.go:89] found id: ""
	I1105 19:14:28.525632   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.525639   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:28.525645   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:28.525696   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:28.558599   74485 cri.go:89] found id: ""
	I1105 19:14:28.558628   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.558638   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:28.558644   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:28.558701   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:28.590496   74485 cri.go:89] found id: ""
	I1105 19:14:28.590522   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.590530   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:28.590535   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:28.590599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:28.622748   74485 cri.go:89] found id: ""
	I1105 19:14:28.622772   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.622780   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:28.622786   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:28.622836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:28.656452   74485 cri.go:89] found id: ""
	I1105 19:14:28.656477   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.656485   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:28.656493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:28.656504   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.736458   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:28.736505   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:28.771923   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:28.771954   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:28.821099   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:28.821133   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:28.834698   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:28.834726   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:28.900543   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.400733   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:31.414573   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:31.414647   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:31.452244   74485 cri.go:89] found id: ""
	I1105 19:14:31.452275   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.452286   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:31.452293   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:31.452353   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:31.485898   74485 cri.go:89] found id: ""
	I1105 19:14:31.485920   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.485935   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:31.485940   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:31.486009   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:31.522826   74485 cri.go:89] found id: ""
	I1105 19:14:31.522850   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.522858   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:31.522865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:31.522925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:31.560096   74485 cri.go:89] found id: ""
	I1105 19:14:31.560136   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.560164   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:31.560174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:31.560234   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:31.596698   74485 cri.go:89] found id: ""
	I1105 19:14:31.596725   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.596733   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:31.596738   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:31.596792   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:31.635109   74485 cri.go:89] found id: ""
	I1105 19:14:31.635138   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.635148   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:31.635156   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:31.635221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:31.667612   74485 cri.go:89] found id: ""
	I1105 19:14:31.667639   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.667651   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:31.667658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:31.667726   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:31.699815   74485 cri.go:89] found id: ""
	I1105 19:14:31.699844   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.699854   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:31.699864   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:31.699879   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:31.737165   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:31.737196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:31.788513   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:31.788550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:31.801580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:31.801609   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:31.871658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.871683   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:31.871696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:34.450954   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:34.466129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:34.466204   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:34.499984   74485 cri.go:89] found id: ""
	I1105 19:14:34.500009   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.500020   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:34.500027   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:34.500091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:34.532923   74485 cri.go:89] found id: ""
	I1105 19:14:34.532950   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.532958   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:34.532969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:34.533017   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:34.566772   74485 cri.go:89] found id: ""
	I1105 19:14:34.566803   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.566811   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:34.566817   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:34.566872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:34.607398   74485 cri.go:89] found id: ""
	I1105 19:14:34.607422   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.607430   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:34.607435   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:34.607497   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:34.640091   74485 cri.go:89] found id: ""
	I1105 19:14:34.640123   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.640135   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:34.640143   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:34.640207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:34.677164   74485 cri.go:89] found id: ""
	I1105 19:14:34.677201   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.677211   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:34.677217   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:34.677266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:34.714900   74485 cri.go:89] found id: ""
	I1105 19:14:34.714931   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.714942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:34.714949   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:34.715023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:34.751003   74485 cri.go:89] found id: ""
	I1105 19:14:34.751032   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.751040   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:34.751048   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:34.751059   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:34.822279   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:34.822301   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:34.822315   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:34.898607   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:34.898640   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:34.934727   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:34.934754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:34.985935   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:34.985969   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.500117   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:37.512467   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:37.512541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:37.544914   74485 cri.go:89] found id: ""
	I1105 19:14:37.544941   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.544952   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:37.544959   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:37.545028   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:37.581507   74485 cri.go:89] found id: ""
	I1105 19:14:37.581535   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.581545   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:37.581553   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:37.581612   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:37.615546   74485 cri.go:89] found id: ""
	I1105 19:14:37.615576   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.615585   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:37.615592   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:37.615667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:37.648239   74485 cri.go:89] found id: ""
	I1105 19:14:37.648267   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.648276   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:37.648283   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:37.648343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:37.682861   74485 cri.go:89] found id: ""
	I1105 19:14:37.682891   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.682898   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:37.682904   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:37.682952   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:37.715506   74485 cri.go:89] found id: ""
	I1105 19:14:37.715532   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.715540   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:37.715547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:37.715597   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:37.747973   74485 cri.go:89] found id: ""
	I1105 19:14:37.748003   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.748014   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:37.748022   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:37.748083   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:37.780270   74485 cri.go:89] found id: ""
	I1105 19:14:37.780294   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.780302   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:37.780310   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:37.780321   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.793885   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:37.793914   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:37.860114   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:37.860140   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:37.860154   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:37.941221   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:37.941255   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.980537   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:37.980567   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.532301   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:40.545540   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:40.545599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:40.578642   74485 cri.go:89] found id: ""
	I1105 19:14:40.578687   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.578699   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:40.578707   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:40.578772   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:40.612049   74485 cri.go:89] found id: ""
	I1105 19:14:40.612078   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.612089   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:40.612097   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:40.612159   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:40.644495   74485 cri.go:89] found id: ""
	I1105 19:14:40.644519   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.644527   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:40.644532   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:40.644587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:40.676890   74485 cri.go:89] found id: ""
	I1105 19:14:40.676923   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.676931   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:40.676937   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:40.676984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:40.710095   74485 cri.go:89] found id: ""
	I1105 19:14:40.710125   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.710136   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:40.710144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:40.710200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:40.748323   74485 cri.go:89] found id: ""
	I1105 19:14:40.748353   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.748364   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:40.748372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:40.748501   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:40.781578   74485 cri.go:89] found id: ""
	I1105 19:14:40.781606   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.781618   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:40.781626   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:40.781689   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:40.816010   74485 cri.go:89] found id: ""
	I1105 19:14:40.816048   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.816060   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:40.816071   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:40.816086   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.869836   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:40.869876   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:40.883436   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:40.883471   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:40.946538   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:40.946566   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:40.946585   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:41.023085   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:41.023123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:43.566841   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:43.579425   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:43.579498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:43.620500   74485 cri.go:89] found id: ""
	I1105 19:14:43.620526   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.620535   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:43.620541   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:43.620600   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:43.652992   74485 cri.go:89] found id: ""
	I1105 19:14:43.653024   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.653035   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:43.653042   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:43.653105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:43.686945   74485 cri.go:89] found id: ""
	I1105 19:14:43.686991   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.687003   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:43.687010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:43.687124   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:43.720075   74485 cri.go:89] found id: ""
	I1105 19:14:43.720103   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.720114   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:43.720121   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:43.720179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:43.757969   74485 cri.go:89] found id: ""
	I1105 19:14:43.757997   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.758005   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:43.758011   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:43.758071   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:43.790068   74485 cri.go:89] found id: ""
	I1105 19:14:43.790094   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.790103   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:43.790109   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:43.790153   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:43.821696   74485 cri.go:89] found id: ""
	I1105 19:14:43.821722   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.821733   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:43.821741   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:43.821803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:43.855976   74485 cri.go:89] found id: ""
	I1105 19:14:43.856003   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.856011   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:43.856019   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:43.856029   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:43.934375   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:43.934409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:43.972567   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:43.972597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:44.025660   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:44.025696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:44.039229   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:44.039258   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:44.112179   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:46.612815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:46.626070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:46.626145   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:46.659184   74485 cri.go:89] found id: ""
	I1105 19:14:46.659210   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.659218   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:46.659227   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:46.659288   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:46.691887   74485 cri.go:89] found id: ""
	I1105 19:14:46.691917   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.691928   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:46.691934   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:46.692003   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:46.725745   74485 cri.go:89] found id: ""
	I1105 19:14:46.725776   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.725787   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:46.725795   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:46.725847   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:46.761733   74485 cri.go:89] found id: ""
	I1105 19:14:46.761762   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.761773   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:46.761780   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:46.761842   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:46.792926   74485 cri.go:89] found id: ""
	I1105 19:14:46.792955   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.792966   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:46.792974   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:46.793036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:46.824462   74485 cri.go:89] found id: ""
	I1105 19:14:46.824503   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.824512   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:46.824519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:46.824580   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:46.865057   74485 cri.go:89] found id: ""
	I1105 19:14:46.865082   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.865090   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:46.865095   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:46.865146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:46.901357   74485 cri.go:89] found id: ""
	I1105 19:14:46.901385   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.901393   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:46.901401   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:46.901414   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:46.951986   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:46.952021   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:46.966035   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:46.966065   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:47.035163   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:47.035184   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:47.035196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:47.115825   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:47.115860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:49.658737   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:49.672088   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:49.672182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:49.708638   74485 cri.go:89] found id: ""
	I1105 19:14:49.708666   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.708674   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:49.708679   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:49.708736   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:49.744485   74485 cri.go:89] found id: ""
	I1105 19:14:49.744513   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.744521   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:49.744526   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:49.744572   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:49.779758   74485 cri.go:89] found id: ""
	I1105 19:14:49.779785   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.779794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:49.779800   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:49.779858   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:49.814216   74485 cri.go:89] found id: ""
	I1105 19:14:49.814248   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.814256   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:49.814262   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:49.814310   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:49.851348   74485 cri.go:89] found id: ""
	I1105 19:14:49.851377   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.851389   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:49.851396   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:49.851455   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:49.883866   74485 cri.go:89] found id: ""
	I1105 19:14:49.883897   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.883906   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:49.883912   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:49.883959   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:49.916944   74485 cri.go:89] found id: ""
	I1105 19:14:49.916967   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.916975   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:49.916980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:49.917039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:49.950405   74485 cri.go:89] found id: ""
	I1105 19:14:49.950437   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.950449   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:49.950459   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:49.950475   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:49.996064   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:49.996102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:50.044865   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:50.044902   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:50.058206   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:50.058236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:50.130371   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:50.130397   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:50.130412   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:52.706441   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:52.719571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:52.719655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:52.753850   74485 cri.go:89] found id: ""
	I1105 19:14:52.753880   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.753891   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:52.753899   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:52.753961   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:52.794112   74485 cri.go:89] found id: ""
	I1105 19:14:52.794139   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.794149   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:52.794156   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:52.794218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:52.830151   74485 cri.go:89] found id: ""
	I1105 19:14:52.830178   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.830188   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:52.830195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:52.830258   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:52.864803   74485 cri.go:89] found id: ""
	I1105 19:14:52.864832   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.864853   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:52.864868   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:52.864930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:52.897237   74485 cri.go:89] found id: ""
	I1105 19:14:52.897271   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.897282   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:52.897289   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:52.897351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:52.932236   74485 cri.go:89] found id: ""
	I1105 19:14:52.932262   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.932270   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:52.932275   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:52.932319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:52.965781   74485 cri.go:89] found id: ""
	I1105 19:14:52.965808   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.965817   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:52.965825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:52.965918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:52.999098   74485 cri.go:89] found id: ""
	I1105 19:14:52.999121   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.999129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:52.999137   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:52.999146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:53.051085   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:53.051127   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:53.064690   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:53.064717   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:53.128334   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:53.128358   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:53.128372   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:53.207751   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:53.207791   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:55.745430   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:55.758734   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:55.758821   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:55.791827   74485 cri.go:89] found id: ""
	I1105 19:14:55.791854   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.791862   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:55.791868   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:55.791922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:55.824191   74485 cri.go:89] found id: ""
	I1105 19:14:55.824217   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.824224   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:55.824230   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:55.824278   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:55.858579   74485 cri.go:89] found id: ""
	I1105 19:14:55.858611   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.858619   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:55.858625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:55.858673   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:55.891579   74485 cri.go:89] found id: ""
	I1105 19:14:55.891604   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.891612   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:55.891617   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:55.891663   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:55.924881   74485 cri.go:89] found id: ""
	I1105 19:14:55.924910   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.924920   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:55.924930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:55.924999   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:55.956634   74485 cri.go:89] found id: ""
	I1105 19:14:55.956663   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.956678   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:55.956686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:55.956742   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:55.988770   74485 cri.go:89] found id: ""
	I1105 19:14:55.988803   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.988814   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:55.988821   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:55.988880   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:56.022236   74485 cri.go:89] found id: ""
	I1105 19:14:56.022257   74485 logs.go:282] 0 containers: []
	W1105 19:14:56.022266   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:56.022273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:56.022284   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:56.073035   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:56.073071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:56.086899   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:56.086923   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:56.158219   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:56.158247   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:56.158259   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:56.246621   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:56.246660   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:58.791443   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:58.804398   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:58.804476   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:58.837812   74485 cri.go:89] found id: ""
	I1105 19:14:58.837840   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.837856   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:58.837863   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:58.837926   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:58.870154   74485 cri.go:89] found id: ""
	I1105 19:14:58.870186   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.870197   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:58.870204   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:58.870268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:58.906518   74485 cri.go:89] found id: ""
	I1105 19:14:58.906545   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.906553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:58.906563   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:58.906614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:58.939320   74485 cri.go:89] found id: ""
	I1105 19:14:58.939346   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.939357   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:58.939364   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:58.939426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:58.974116   74485 cri.go:89] found id: ""
	I1105 19:14:58.974143   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.974153   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:58.974160   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:58.974221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:59.006820   74485 cri.go:89] found id: ""
	I1105 19:14:59.006854   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.006866   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:59.006873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:59.006933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:59.039691   74485 cri.go:89] found id: ""
	I1105 19:14:59.039723   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.039735   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:59.039742   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:59.039800   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:59.071829   74485 cri.go:89] found id: ""
	I1105 19:14:59.071860   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.071881   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:59.071893   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:59.071906   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:59.124158   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:59.124195   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:59.138563   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:59.138594   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:59.216148   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:59.216174   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:59.216189   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:59.295262   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:59.295297   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:01.833789   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:01.847332   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:01.847408   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:01.882721   74485 cri.go:89] found id: ""
	I1105 19:15:01.882743   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.882750   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:01.882755   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:01.882811   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:01.916457   74485 cri.go:89] found id: ""
	I1105 19:15:01.916479   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.916487   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:01.916502   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:01.916557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:01.950521   74485 cri.go:89] found id: ""
	I1105 19:15:01.950552   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.950564   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:01.950571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:01.950624   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:01.985823   74485 cri.go:89] found id: ""
	I1105 19:15:01.985852   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.985862   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:01.985870   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:01.985918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:02.021689   74485 cri.go:89] found id: ""
	I1105 19:15:02.021720   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.021731   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:02.021739   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:02.021804   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:02.058632   74485 cri.go:89] found id: ""
	I1105 19:15:02.058658   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.058666   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:02.058672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:02.058738   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:02.097916   74485 cri.go:89] found id: ""
	I1105 19:15:02.097947   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.097956   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:02.097961   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:02.098010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:02.131992   74485 cri.go:89] found id: ""
	I1105 19:15:02.132027   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.132038   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:02.132050   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:02.132066   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:02.188605   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:02.188645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:02.201873   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:02.201904   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:02.274767   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:02.274795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:02.274811   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:02.358520   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:02.358559   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:04.897693   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:04.913131   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:04.913207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:04.952546   74485 cri.go:89] found id: ""
	I1105 19:15:04.952571   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.952579   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:04.952584   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:04.952643   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:04.987334   74485 cri.go:89] found id: ""
	I1105 19:15:04.987360   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.987368   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:04.987374   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:04.987434   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:05.021873   74485 cri.go:89] found id: ""
	I1105 19:15:05.021906   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.021919   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:05.021926   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:05.021985   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:05.056169   74485 cri.go:89] found id: ""
	I1105 19:15:05.056199   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.056208   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:05.056213   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:05.056265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:05.093090   74485 cri.go:89] found id: ""
	I1105 19:15:05.093117   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.093125   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:05.093130   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:05.093182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:05.127533   74485 cri.go:89] found id: ""
	I1105 19:15:05.127557   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.127564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:05.127576   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:05.127625   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:05.165127   74485 cri.go:89] found id: ""
	I1105 19:15:05.165162   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.165173   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:05.165180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:05.165243   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:05.200526   74485 cri.go:89] found id: ""
	I1105 19:15:05.200556   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.200567   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:05.200578   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:05.200593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:05.247497   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:05.247535   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:05.261963   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:05.261996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:05.336813   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:05.336833   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:05.336844   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:05.412278   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:05.412320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:07.951085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:07.966125   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:07.966203   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:08.004253   74485 cri.go:89] found id: ""
	I1105 19:15:08.004291   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.004302   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:08.004310   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:08.004373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:08.039539   74485 cri.go:89] found id: ""
	I1105 19:15:08.039562   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.039569   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:08.039575   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:08.039629   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:08.076043   74485 cri.go:89] found id: ""
	I1105 19:15:08.076080   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.076093   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:08.076101   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:08.076157   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:08.110489   74485 cri.go:89] found id: ""
	I1105 19:15:08.110512   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.110519   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:08.110525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:08.110589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:08.147532   74485 cri.go:89] found id: ""
	I1105 19:15:08.147564   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.147574   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:08.147580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:08.147628   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:08.182225   74485 cri.go:89] found id: ""
	I1105 19:15:08.182248   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.182256   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:08.182263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:08.182322   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:08.223488   74485 cri.go:89] found id: ""
	I1105 19:15:08.223524   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.223536   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:08.223544   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:08.223610   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:08.266524   74485 cri.go:89] found id: ""
	I1105 19:15:08.266559   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.266571   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:08.266582   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:08.266597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:08.279036   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:08.279061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:08.346030   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:08.346052   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:08.346064   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:08.428081   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:08.428118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:08.464760   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:08.464789   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.016193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:11.030598   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:11.030681   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:11.066035   74485 cri.go:89] found id: ""
	I1105 19:15:11.066064   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.066073   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:11.066078   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:11.066133   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:11.103906   74485 cri.go:89] found id: ""
	I1105 19:15:11.103937   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.103948   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:11.103955   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:11.104023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:11.142936   74485 cri.go:89] found id: ""
	I1105 19:15:11.143024   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.143034   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:11.143041   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:11.143091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:11.180041   74485 cri.go:89] found id: ""
	I1105 19:15:11.180074   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.180086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:11.180094   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:11.180158   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:11.215661   74485 cri.go:89] found id: ""
	I1105 19:15:11.215693   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.215701   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:11.215707   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:11.215758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:11.252603   74485 cri.go:89] found id: ""
	I1105 19:15:11.252651   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.252663   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:11.252672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:11.252739   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:11.299295   74485 cri.go:89] found id: ""
	I1105 19:15:11.299328   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.299340   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:11.299347   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:11.299402   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:11.355153   74485 cri.go:89] found id: ""
	I1105 19:15:11.355177   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.355185   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:11.355193   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:11.355206   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:11.441076   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:11.441118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:11.480367   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:11.480396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.534646   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:11.534683   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:11.548141   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:11.548170   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:11.616452   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:14.117448   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:14.131224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:14.131297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:14.167811   74485 cri.go:89] found id: ""
	I1105 19:15:14.167843   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.167855   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:14.167862   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:14.167921   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:14.204128   74485 cri.go:89] found id: ""
	I1105 19:15:14.204156   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.204164   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:14.204169   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:14.204232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:14.240687   74485 cri.go:89] found id: ""
	I1105 19:15:14.240716   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.240727   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:14.240735   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:14.240788   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:14.274204   74485 cri.go:89] found id: ""
	I1105 19:15:14.274231   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.274242   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:14.274249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:14.274307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:14.312090   74485 cri.go:89] found id: ""
	I1105 19:15:14.312119   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.312130   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:14.312139   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:14.312200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:14.346824   74485 cri.go:89] found id: ""
	I1105 19:15:14.346857   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.346868   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:14.346875   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:14.346934   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:14.380634   74485 cri.go:89] found id: ""
	I1105 19:15:14.380668   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.380679   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:14.380686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:14.380746   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:14.414402   74485 cri.go:89] found id: ""
	I1105 19:15:14.414432   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.414441   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:14.414449   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:14.414459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:14.464542   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:14.464581   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:14.478195   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:14.478225   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:14.553670   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:14.553693   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:14.553708   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:14.634619   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:14.634659   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.174085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:17.191712   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:17.191771   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:17.234101   74485 cri.go:89] found id: ""
	I1105 19:15:17.234132   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.234143   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:17.234149   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:17.234213   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:17.281548   74485 cri.go:89] found id: ""
	I1105 19:15:17.281574   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.281581   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:17.281588   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:17.281655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:17.337698   74485 cri.go:89] found id: ""
	I1105 19:15:17.337727   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.337735   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:17.337743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:17.337790   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:17.371756   74485 cri.go:89] found id: ""
	I1105 19:15:17.371782   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.371790   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:17.371796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:17.371854   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:17.404989   74485 cri.go:89] found id: ""
	I1105 19:15:17.405015   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.405026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:17.405033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:17.405096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:17.438613   74485 cri.go:89] found id: ""
	I1105 19:15:17.438637   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.438648   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:17.438656   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:17.438717   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:17.470465   74485 cri.go:89] found id: ""
	I1105 19:15:17.470494   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.470502   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:17.470508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:17.470558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:17.503835   74485 cri.go:89] found id: ""
	I1105 19:15:17.503867   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.503876   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:17.503884   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:17.503896   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:17.584110   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:17.584146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.626928   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:17.626955   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:17.679356   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:17.679397   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:17.693476   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:17.693506   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:17.766809   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.266926   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:20.282219   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:20.282293   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:20.322133   74485 cri.go:89] found id: ""
	I1105 19:15:20.322163   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.322171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:20.322178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:20.322248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:20.357030   74485 cri.go:89] found id: ""
	I1105 19:15:20.357072   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.357084   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:20.357091   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:20.357156   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:20.390523   74485 cri.go:89] found id: ""
	I1105 19:15:20.390549   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.390559   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:20.390567   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:20.390631   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:20.425807   74485 cri.go:89] found id: ""
	I1105 19:15:20.425830   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.425837   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:20.425843   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:20.425903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:20.461984   74485 cri.go:89] found id: ""
	I1105 19:15:20.462014   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.462026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:20.462033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:20.462094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:20.495689   74485 cri.go:89] found id: ""
	I1105 19:15:20.495725   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.495739   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:20.495746   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:20.495799   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:20.528666   74485 cri.go:89] found id: ""
	I1105 19:15:20.528701   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.528713   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:20.528721   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:20.528783   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:20.562566   74485 cri.go:89] found id: ""
	I1105 19:15:20.562596   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.562606   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:20.562614   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:20.562624   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:20.610961   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:20.611000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:20.623898   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:20.623928   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:20.696412   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.696440   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:20.696456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:20.779601   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:20.779642   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:23.319846   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:23.333278   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:23.333357   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:23.370771   74485 cri.go:89] found id: ""
	I1105 19:15:23.370796   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.370805   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:23.370810   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:23.370872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:23.405994   74485 cri.go:89] found id: ""
	I1105 19:15:23.406021   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.406029   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:23.406034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:23.406092   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:23.443729   74485 cri.go:89] found id: ""
	I1105 19:15:23.443757   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.443767   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:23.443774   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:23.443836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:23.476162   74485 cri.go:89] found id: ""
	I1105 19:15:23.476188   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.476197   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:23.476205   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:23.476266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:23.509325   74485 cri.go:89] found id: ""
	I1105 19:15:23.509353   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.509363   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:23.509371   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:23.509427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:23.541880   74485 cri.go:89] found id: ""
	I1105 19:15:23.541912   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.541922   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:23.541929   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:23.541993   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:23.574204   74485 cri.go:89] found id: ""
	I1105 19:15:23.574236   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.574248   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:23.574256   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:23.574323   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:23.606865   74485 cri.go:89] found id: ""
	I1105 19:15:23.606896   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.606908   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:23.606918   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:23.606932   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:23.673771   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:23.673792   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:23.673803   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:23.753298   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:23.753335   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:23.792273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:23.792304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:23.843072   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:23.843110   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.356859   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:26.369417   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:26.369488   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:26.403611   74485 cri.go:89] found id: ""
	I1105 19:15:26.403639   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.403647   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:26.403653   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:26.403725   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:26.439891   74485 cri.go:89] found id: ""
	I1105 19:15:26.439924   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.439936   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:26.439943   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:26.439991   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:26.473502   74485 cri.go:89] found id: ""
	I1105 19:15:26.473542   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.473554   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:26.473561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:26.473640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:26.505666   74485 cri.go:89] found id: ""
	I1105 19:15:26.505695   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.505703   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:26.505710   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:26.505769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:26.539781   74485 cri.go:89] found id: ""
	I1105 19:15:26.539815   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.539827   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:26.539835   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:26.539911   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:26.574673   74485 cri.go:89] found id: ""
	I1105 19:15:26.574712   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.574721   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:26.574727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:26.574773   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:26.608410   74485 cri.go:89] found id: ""
	I1105 19:15:26.608433   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.608441   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:26.608446   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:26.608494   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:26.644036   74485 cri.go:89] found id: ""
	I1105 19:15:26.644065   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.644076   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:26.644087   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:26.644098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.718901   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:26.718937   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:26.758920   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:26.758953   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:26.811241   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:26.811277   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.824931   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:26.824961   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:26.891799   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:29.392417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:29.405249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:29.405331   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:29.437397   74485 cri.go:89] found id: ""
	I1105 19:15:29.437432   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.437443   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:29.437450   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:29.437504   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:29.469908   74485 cri.go:89] found id: ""
	I1105 19:15:29.469938   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.469946   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:29.469951   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:29.470008   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:29.502302   74485 cri.go:89] found id: ""
	I1105 19:15:29.502331   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.502339   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:29.502345   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:29.502391   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:29.534285   74485 cri.go:89] found id: ""
	I1105 19:15:29.534309   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.534317   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:29.534322   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:29.534373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:29.571918   74485 cri.go:89] found id: ""
	I1105 19:15:29.571962   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.571973   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:29.571983   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:29.572042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:29.605324   74485 cri.go:89] found id: ""
	I1105 19:15:29.605354   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.605365   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:29.605373   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:29.605435   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:29.640181   74485 cri.go:89] found id: ""
	I1105 19:15:29.640210   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.640218   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:29.640224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:29.640273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:29.671121   74485 cri.go:89] found id: ""
	I1105 19:15:29.671147   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.671155   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:29.671164   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:29.671174   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:29.750821   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:29.750856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:29.787452   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:29.787479   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:29.840413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:29.840459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:29.855540   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:29.855580   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:29.925849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
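	(For the recurring "connection to the server localhost:8443 was refused" error, a quick check from inside the guest, shown here only as a hypothetical illustration and not part of the recorded test run, would be:)

# confirm nothing is listening on the apiserver port and that the
# health endpoint is unreachable; both are standard tools
sudo ss -ltnp | grep 8443 || echo "no listener on tcp/8443"
curl -sk https://localhost:8443/healthz || echo "apiserver health endpoint unreachable"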
	I1105 19:15:32.426016   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:32.438759   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:32.438824   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:32.476376   74485 cri.go:89] found id: ""
	I1105 19:15:32.476406   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.476416   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:32.476423   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:32.476490   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:32.512328   74485 cri.go:89] found id: ""
	I1105 19:15:32.512352   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.512360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:32.512365   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:32.512414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:32.546803   74485 cri.go:89] found id: ""
	I1105 19:15:32.546833   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.546844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:32.546851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:32.546925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:32.585904   74485 cri.go:89] found id: ""
	I1105 19:15:32.585934   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.585946   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:32.585953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:32.586014   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:32.620976   74485 cri.go:89] found id: ""
	I1105 19:15:32.621005   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.621012   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:32.621018   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:32.621082   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:32.658958   74485 cri.go:89] found id: ""
	I1105 19:15:32.659006   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.659018   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:32.659026   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:32.659091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:32.694317   74485 cri.go:89] found id: ""
	I1105 19:15:32.694341   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.694349   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:32.694354   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:32.694403   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:32.728277   74485 cri.go:89] found id: ""
	I1105 19:15:32.728314   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.728327   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:32.728338   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.728352   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.815579   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.815615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.856776   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.856807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.909477   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.909518   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.923789   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.923817   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:32.997898   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:35.498040   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:35.511537   74485 kubeadm.go:597] duration metric: took 4m4.46832509s to restartPrimaryControlPlane
	W1105 19:15:35.511612   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:35.511644   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:39.702249   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.19058336s)
	I1105 19:15:39.702314   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.717966   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:39.728114   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:39.740451   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:39.740476   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:39.740519   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:39.751089   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:39.751150   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:39.761832   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:39.771841   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:39.771904   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:39.782332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.792379   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:39.792438   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.801625   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:39.811691   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:39.811740   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:39.821162   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:39.891377   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:15:39.891443   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:40.034176   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:40.034337   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:40.034476   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:15:40.211588   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:40.213724   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:40.213838   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:40.213939   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:40.214048   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:40.214172   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:40.214266   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:40.214375   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:40.214478   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:40.214567   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:40.214687   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:40.214819   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:40.214884   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:40.214980   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:40.358606   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:40.632263   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:40.766570   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:40.885914   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:40.902379   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:40.903647   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:40.903716   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:41.040274   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:41.042093   74485 out.go:235]   - Booting up control plane ...
	I1105 19:15:41.042222   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:41.048448   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:41.058445   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:41.059466   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:41.062648   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:16:21.064069   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:16:21.064607   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:21.064798   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:26.065202   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:26.065410   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:36.065932   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:36.066151   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:56.066834   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:56.067140   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:17:36.069129   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:17:36.069396   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:17:36.069426   74485 kubeadm.go:310] 
	I1105 19:17:36.069489   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:17:36.069572   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:17:36.069591   74485 kubeadm.go:310] 
	I1105 19:17:36.069638   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:17:36.069699   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:17:36.069843   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:17:36.069852   74485 kubeadm.go:310] 
	I1105 19:17:36.069967   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:17:36.070017   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:17:36.070067   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:17:36.070074   74485 kubeadm.go:310] 
	I1105 19:17:36.070216   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:17:36.070328   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:17:36.070345   74485 kubeadm.go:310] 
	I1105 19:17:36.070486   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:17:36.070622   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:17:36.070690   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:17:36.070758   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:17:36.070767   74485 kubeadm.go:310] 
	I1105 19:17:36.071471   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:17:36.071558   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:17:36.071652   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1105 19:17:36.071791   74485 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1105 19:17:36.071838   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:17:36.527864   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:36.543211   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:17:36.552656   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:17:36.552676   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:17:36.552734   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:17:36.562296   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:17:36.562360   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:17:36.571759   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:17:36.580534   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:17:36.580586   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:17:36.590320   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.599165   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:17:36.599235   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.608340   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:17:36.616935   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:17:36.616986   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:17:36.625948   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:17:36.843267   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:19:32.770686   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:19:32.770828   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 19:19:32.772504   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:19:32.772564   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:19:32.772656   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:19:32.772784   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:19:32.772893   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:19:32.772971   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:19:32.774648   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:19:32.774726   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:19:32.774804   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:19:32.774902   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:19:32.775012   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:19:32.775144   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:19:32.775223   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:19:32.775307   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:19:32.775397   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:19:32.775487   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:19:32.775597   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:19:32.775651   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:19:32.775728   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:19:32.775796   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:19:32.775864   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:19:32.775961   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:19:32.776041   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:19:32.776175   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:19:32.776281   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:19:32.776330   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:19:32.776417   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:19:32.777837   74485 out.go:235]   - Booting up control plane ...
	I1105 19:19:32.777940   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:19:32.778032   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:19:32.778134   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:19:32.778248   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:19:32.778489   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:19:32.778563   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:19:32.778652   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.778960   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779080   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779302   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779399   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779663   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779766   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779990   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780051   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.780241   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780260   74485 kubeadm.go:310] 
	I1105 19:19:32.780325   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:19:32.780381   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:19:32.780391   74485 kubeadm.go:310] 
	I1105 19:19:32.780438   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:19:32.780486   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:19:32.780627   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:19:32.780639   74485 kubeadm.go:310] 
	I1105 19:19:32.780748   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:19:32.780790   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:19:32.780819   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:19:32.780825   74485 kubeadm.go:310] 
	I1105 19:19:32.780961   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:19:32.781048   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:19:32.781055   74485 kubeadm.go:310] 
	I1105 19:19:32.781144   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:19:32.781225   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:19:32.781293   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:19:32.781394   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:19:32.781475   74485 kubeadm.go:394] duration metric: took 8m1.792270232s to StartCluster
	I1105 19:19:32.781485   74485 kubeadm.go:310] 
	I1105 19:19:32.781522   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:19:32.781589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:19:32.825435   74485 cri.go:89] found id: ""
	I1105 19:19:32.825465   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.825475   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:19:32.825482   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:19:32.825543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:19:32.859245   74485 cri.go:89] found id: ""
	I1105 19:19:32.859275   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.859286   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:19:32.859293   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:19:32.859355   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:19:32.890801   74485 cri.go:89] found id: ""
	I1105 19:19:32.890833   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.890844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:19:32.890851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:19:32.890919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:19:32.925244   74485 cri.go:89] found id: ""
	I1105 19:19:32.925273   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.925280   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:19:32.925287   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:19:32.925352   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:19:32.959091   74485 cri.go:89] found id: ""
	I1105 19:19:32.959118   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.959129   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:19:32.959137   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:19:32.959191   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:19:32.990230   74485 cri.go:89] found id: ""
	I1105 19:19:32.990264   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.990276   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:19:32.990284   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:19:32.990343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:19:33.027461   74485 cri.go:89] found id: ""
	I1105 19:19:33.027494   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.027505   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:19:33.027512   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:19:33.027574   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:19:33.070819   74485 cri.go:89] found id: ""
	I1105 19:19:33.070847   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.070858   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:19:33.070869   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:19:33.070883   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:19:33.122580   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:19:33.122615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:19:33.136015   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:19:33.136043   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:19:33.213727   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:19:33.213750   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:19:33.213762   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:19:33.324287   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:19:33.324333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1105 19:19:33.384732   74485 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 19:19:33.384785   74485 out.go:270] * 
	* 
	W1105 19:19:33.384844   74485 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.384857   74485 out.go:270] * 
	* 
	W1105 19:19:33.385632   74485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:19:33.388860   74485 out.go:201] 
	W1105 19:19:33.390328   74485 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.390366   74485 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 19:19:33.390393   74485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 19:19:33.391785   74485 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-567666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
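For reference, the suggestion emitted in the stderr above ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") would correspond to re-running the failing start command with that flag appended, roughly as sketched below. This is only an illustrative reproduction hint, not a step the test performed; every flag other than --extra-config is copied from the failing invocation quoted above:

	out/minikube-linux-amd64 start -p old-k8s-version-567666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd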
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 2 (225.514306ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-567666 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-567666 logs -n 25: (1.518797719s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-929548 sudo cat                              | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo find                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo crio                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-929548                                       | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-537175 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | disable-driver-mounts-537175                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:04 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-459223             | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-271881            | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:07:52.649090   74485 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:07:52.649200   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649205   74485 out.go:358] Setting ErrFile to fd 2...
	I1105 19:07:52.649210   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649374   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:07:52.649909   74485 out.go:352] Setting JSON to false
	I1105 19:07:52.650785   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6615,"bootTime":1730827058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:07:52.650878   74485 start.go:139] virtualization: kvm guest
	I1105 19:07:52.652866   74485 out.go:177] * [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:07:52.654107   74485 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:07:52.654107   74485 notify.go:220] Checking for updates...
	I1105 19:07:52.655282   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:07:52.656379   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:07:52.657451   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:07:52.658694   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:07:52.659835   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:07:52.661251   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:07:52.661622   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.661660   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.677005   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I1105 19:07:52.677521   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.678096   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.678118   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.678489   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.678735   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.680466   74485 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1105 19:07:52.681734   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:07:52.682087   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.682139   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.697071   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1105 19:07:52.697503   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.697958   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.697980   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.698259   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.698439   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.732962   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:07:52.734079   74485 start.go:297] selected driver: kvm2
	I1105 19:07:52.734094   74485 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.734209   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:07:52.734912   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.735038   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:07:52.750214   74485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:07:52.750609   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:07:52.750641   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:07:52.750697   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:07:52.750745   74485 start.go:340] cluster config:
	{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.750877   74485 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.753288   74485 out.go:177] * Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	I1105 19:07:50.739209   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:53.811246   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:52.754354   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:07:52.754407   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 19:07:52.754425   74485 cache.go:56] Caching tarball of preloaded images
	I1105 19:07:52.754503   74485 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:07:52.754515   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 19:07:52.754610   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:07:52.754817   74485 start.go:360] acquireMachinesLock for old-k8s-version-567666: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:07:59.891257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:02.963247   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:09.043263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:12.115289   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:18.195275   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:21.267213   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:27.347251   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:30.419240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:36.499291   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:39.571255   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:45.651258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:48.723262   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:54.803265   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:57.875236   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:03.955241   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:07.027229   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:13.107258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:16.179257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:22.259227   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:25.331263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:31.411234   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:34.483240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:40.563258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:43.635253   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:49.715287   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:52.787276   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:58.867242   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:01.939296   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:08.019268   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:11.091350   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:17.171266   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:20.243245   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:23.247511   73732 start.go:364] duration metric: took 4m30.277290481s to acquireMachinesLock for "embed-certs-271881"
	I1105 19:10:23.247565   73732 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:23.247590   73732 fix.go:54] fixHost starting: 
	I1105 19:10:23.248173   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:23.248235   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:23.263573   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I1105 19:10:23.264016   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:23.264437   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:10:23.264461   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:23.264888   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:23.265122   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:23.265311   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:10:23.267000   73732 fix.go:112] recreateIfNeeded on embed-certs-271881: state=Stopped err=<nil>
	I1105 19:10:23.267031   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	W1105 19:10:23.267183   73732 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:23.269188   73732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-271881" ...
	I1105 19:10:23.244961   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:23.245021   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245327   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:10:23.245352   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245536   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:10:23.247352   73496 machine.go:96] duration metric: took 4m37.425023044s to provisionDockerMachine
	I1105 19:10:23.247393   73496 fix.go:56] duration metric: took 4m37.446801616s for fixHost
	I1105 19:10:23.247400   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 4m37.446835698s
	W1105 19:10:23.247424   73496 start.go:714] error starting host: provision: host is not running
	W1105 19:10:23.247522   73496 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1105 19:10:23.247534   73496 start.go:729] Will try again in 5 seconds ...
	I1105 19:10:23.270443   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Start
	I1105 19:10:23.270681   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring networks are active...
	I1105 19:10:23.271552   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network default is active
	I1105 19:10:23.271924   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network mk-embed-certs-271881 is active
	I1105 19:10:23.272243   73732 main.go:141] libmachine: (embed-certs-271881) Getting domain xml...
	I1105 19:10:23.273027   73732 main.go:141] libmachine: (embed-certs-271881) Creating domain...
	I1105 19:10:24.503219   73732 main.go:141] libmachine: (embed-certs-271881) Waiting to get IP...
	I1105 19:10:24.504067   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.504444   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.504503   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.504415   75020 retry.go:31] will retry after 194.539819ms: waiting for machine to come up
	I1105 19:10:24.701086   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.701552   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.701579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.701501   75020 retry.go:31] will retry after 361.371677ms: waiting for machine to come up
	I1105 19:10:25.064078   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.064484   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.064512   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.064433   75020 retry.go:31] will retry after 442.206433ms: waiting for machine to come up
	I1105 19:10:25.507981   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.508380   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.508405   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.508338   75020 retry.go:31] will retry after 573.453662ms: waiting for machine to come up
	I1105 19:10:26.083299   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.083727   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.083753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.083670   75020 retry.go:31] will retry after 686.210957ms: waiting for machine to come up
	I1105 19:10:26.771637   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.772070   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.772112   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.772062   75020 retry.go:31] will retry after 685.825223ms: waiting for machine to come up
	I1105 19:10:27.459230   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:27.459652   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:27.459677   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:27.459616   75020 retry.go:31] will retry after 1.167971852s: waiting for machine to come up
	I1105 19:10:28.247729   73496 start.go:360] acquireMachinesLock for no-preload-459223: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:10:28.629194   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:28.629526   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:28.629549   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:28.629488   75020 retry.go:31] will retry after 1.180980288s: waiting for machine to come up
	I1105 19:10:29.812048   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:29.812445   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:29.812475   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:29.812390   75020 retry.go:31] will retry after 1.527253183s: waiting for machine to come up
	I1105 19:10:31.342147   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:31.342519   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:31.342546   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:31.342467   75020 retry.go:31] will retry after 1.597485878s: waiting for machine to come up
	I1105 19:10:32.942141   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:32.942459   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:32.942505   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:32.942431   75020 retry.go:31] will retry after 2.416793509s: waiting for machine to come up
	I1105 19:10:35.360354   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:35.360717   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:35.360743   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:35.360674   75020 retry.go:31] will retry after 3.193637492s: waiting for machine to come up
	I1105 19:10:38.556294   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:38.556744   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:38.556775   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:38.556673   75020 retry.go:31] will retry after 3.819760443s: waiting for machine to come up
	I1105 19:10:42.380607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381140   73732 main.go:141] libmachine: (embed-certs-271881) Found IP for machine: 192.168.39.58
	I1105 19:10:42.381172   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has current primary IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381196   73732 main.go:141] libmachine: (embed-certs-271881) Reserving static IP address...
	I1105 19:10:42.381607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.381634   73732 main.go:141] libmachine: (embed-certs-271881) Reserved static IP address: 192.168.39.58
	I1105 19:10:42.381647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | skip adding static IP to network mk-embed-certs-271881 - found existing host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"}
	I1105 19:10:42.381671   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Getting to WaitForSSH function...
	I1105 19:10:42.381686   73732 main.go:141] libmachine: (embed-certs-271881) Waiting for SSH to be available...
	I1105 19:10:42.383908   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384306   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.384333   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384427   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH client type: external
	I1105 19:10:42.384458   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa (-rw-------)
	I1105 19:10:42.384486   73732 main.go:141] libmachine: (embed-certs-271881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:10:42.384502   73732 main.go:141] libmachine: (embed-certs-271881) DBG | About to run SSH command:
	I1105 19:10:42.384510   73732 main.go:141] libmachine: (embed-certs-271881) DBG | exit 0
	I1105 19:10:42.506807   73732 main.go:141] libmachine: (embed-certs-271881) DBG | SSH cmd err, output: <nil>: 
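The retry.go lines above show the driver polling for the VM's DHCP lease with a growing, slightly randomized delay until SSH answers. A minimal, self-contained Go sketch of that wait-with-backoff pattern (not minikube's actual code; the waitFor helper, the 200ms starting delay, and the doubling factor are illustrative assumptions):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries fn with a randomized, growing delay until it succeeds
	// or the timeout passes. Sketch of the back-off seen in the retry.go log
	// lines above; not minikube's implementation.
	func waitFor(fn func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond // assumed starting delay
		for {
			if err := fn(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting: %w", err)
			} else {
				// add jitter so parallel waiters do not poll in lockstep
				sleep := delay + time.Duration(rand.Int63n(int64(delay)))
				fmt.Printf("will retry after %v: %v\n", sleep, err)
				time.Sleep(sleep)
				delay *= 2
			}
		}
	}

	func main() {
		attempts := 0
		_ = waitFor(func() error {
			attempts++
			if attempts < 4 {
				return fmt.Errorf("machine has no IP yet")
			}
			return nil
		}, 30*time.Second)
	}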
	I1105 19:10:42.507217   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetConfigRaw
	I1105 19:10:42.507868   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.510314   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.510680   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510936   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/config.json ...
	I1105 19:10:42.511183   73732 machine.go:93] provisionDockerMachine start ...
	I1105 19:10:42.511203   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:42.511426   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.513759   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514111   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.514144   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514290   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.514473   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514654   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514827   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.514979   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.515191   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.515202   73732 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:10:42.619241   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:10:42.619273   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619517   73732 buildroot.go:166] provisioning hostname "embed-certs-271881"
	I1105 19:10:42.619555   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619735   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.622695   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623117   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.623146   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623304   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.623465   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623632   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623825   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.623957   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.624122   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.624135   73732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-271881 && echo "embed-certs-271881" | sudo tee /etc/hostname
	I1105 19:10:42.740722   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-271881
	
	I1105 19:10:42.740749   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.743579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.743922   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.743945   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.744160   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.744343   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744470   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.744756   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.744950   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.744972   73732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-271881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-271881/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-271881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:10:42.854869   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:42.854898   73732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:10:42.854926   73732 buildroot.go:174] setting up certificates
	I1105 19:10:42.854940   73732 provision.go:84] configureAuth start
	I1105 19:10:42.854948   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.855222   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.857913   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858228   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.858252   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858440   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.860753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861041   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.861062   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861222   73732 provision.go:143] copyHostCerts
	I1105 19:10:42.861274   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:10:42.861291   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:10:42.861385   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:10:42.861543   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:10:42.861556   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:10:42.861595   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:10:42.861671   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:10:42.861681   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:10:42.861713   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:10:42.861781   73732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.embed-certs-271881 san=[127.0.0.1 192.168.39.58 embed-certs-271881 localhost minikube]
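The "generating server cert" step above issues a certificate whose SANs cover loopback, the VM IP, and the machine names. A small Go sketch of producing a SAN-bearing certificate with the standard library (a self-signed stand-in, not minikube's provision code; the key size, validity window, and subject values are assumptions):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Sketch only: self-signed certificate with SANs similar to the log
		// (loopback, VM IP, machine name). Values are illustrative.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-271881"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-271881", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}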
	I1105 19:10:43.659372   74141 start.go:364] duration metric: took 3m39.006624915s to acquireMachinesLock for "default-k8s-diff-port-608095"
	I1105 19:10:43.659450   74141 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:43.659458   74141 fix.go:54] fixHost starting: 
	I1105 19:10:43.659814   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:43.659874   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:43.677604   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I1105 19:10:43.678132   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:43.678624   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:10:43.678649   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:43.679047   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:43.679250   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:10:43.679407   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:10:43.681036   74141 fix.go:112] recreateIfNeeded on default-k8s-diff-port-608095: state=Stopped err=<nil>
	I1105 19:10:43.681063   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	W1105 19:10:43.681194   74141 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:43.683110   74141 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-608095" ...
	I1105 19:10:43.684451   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Start
	I1105 19:10:43.684639   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring networks are active...
	I1105 19:10:43.685436   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network default is active
	I1105 19:10:43.685983   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network mk-default-k8s-diff-port-608095 is active
	I1105 19:10:43.686398   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Getting domain xml...
	I1105 19:10:43.687143   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Creating domain...
	I1105 19:10:43.044648   73732 provision.go:177] copyRemoteCerts
	I1105 19:10:43.044703   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:10:43.044730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.047120   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047506   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.047538   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047717   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.047886   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.048037   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.048186   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.129098   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:10:43.154632   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1105 19:10:43.179681   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 19:10:43.205598   73732 provision.go:87] duration metric: took 350.648117ms to configureAuth
	I1105 19:10:43.205622   73732 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:10:43.205822   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:10:43.205900   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.208446   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.208763   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.208799   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.209006   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.209190   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209489   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.209611   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.209828   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.209850   73732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:10:43.431540   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:10:43.431569   73732 machine.go:96] duration metric: took 920.370689ms to provisionDockerMachine
	I1105 19:10:43.431582   73732 start.go:293] postStartSetup for "embed-certs-271881" (driver="kvm2")
	I1105 19:10:43.431595   73732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:10:43.431617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.431912   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:10:43.431940   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.434821   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435170   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.435193   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435338   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.435532   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.435714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.435851   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.517391   73732 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:10:43.521532   73732 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:10:43.521553   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:10:43.521632   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:10:43.521721   73732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:10:43.521839   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:10:43.531045   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:43.556596   73732 start.go:296] duration metric: took 125.000692ms for postStartSetup
	I1105 19:10:43.556634   73732 fix.go:56] duration metric: took 20.309059136s for fixHost
	I1105 19:10:43.556663   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.558888   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559181   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.559220   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.559531   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559674   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.559934   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.560096   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.560106   73732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:10:43.659219   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833843.637801657
	
	I1105 19:10:43.659240   73732 fix.go:216] guest clock: 1730833843.637801657
	I1105 19:10:43.659247   73732 fix.go:229] Guest: 2024-11-05 19:10:43.637801657 +0000 UTC Remote: 2024-11-05 19:10:43.556637855 +0000 UTC m=+290.729857868 (delta=81.163802ms)
	I1105 19:10:43.659284   73732 fix.go:200] guest clock delta is within tolerance: 81.163802ms
	I1105 19:10:43.659290   73732 start.go:83] releasing machines lock for "embed-certs-271881", held for 20.411743975s
	I1105 19:10:43.659324   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.659589   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:43.662581   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663025   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.663058   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663214   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663907   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.664017   73732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:10:43.664057   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.664108   73732 ssh_runner.go:195] Run: cat /version.json
	I1105 19:10:43.664131   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.666998   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667059   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667365   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667395   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667424   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667438   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667543   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667638   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667897   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667968   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667996   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.668078   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.775067   73732 ssh_runner.go:195] Run: systemctl --version
	I1105 19:10:43.780892   73732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:10:43.919564   73732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:10:43.926362   73732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:10:43.926422   73732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:10:43.942359   73732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:10:43.942378   73732 start.go:495] detecting cgroup driver to use...
	I1105 19:10:43.942450   73732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:10:43.964650   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:10:43.980651   73732 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:10:43.980717   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:10:43.993988   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:10:44.007440   73732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:10:44.132040   73732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:10:44.314220   73732 docker.go:233] disabling docker service ...
	I1105 19:10:44.314294   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:10:44.337362   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:10:44.351277   73732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:10:44.485105   73732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:10:44.621596   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:10:44.636254   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:10:44.656530   73732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:10:44.656595   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.667156   73732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:10:44.667237   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.682233   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.692814   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.704688   73732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:10:44.721662   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.738629   73732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.754944   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.765089   73732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:10:44.774147   73732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:10:44.774210   73732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:10:44.786312   73732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
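The three commands above form a fallback chain: probe the bridge-netfilter sysctl, load br_netfilter if the key is missing, then enable IPv4 forwarding. A short Go sketch of the same check-then-load logic (illustrative only; it needs root on a Linux host, and the ensureBridgeNetfilter name is made up):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the fallback logged above: if the sysctl
	// key is absent, load the br_netfilter module, then enable ip_forward.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil // key exists, module already loaded
		}
		if err := exec.Command("modprobe", "br_netfilter"); err != nil && err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
		return exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}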
	I1105 19:10:44.795892   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:44.926823   73732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:10:45.022945   73732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:10:45.023042   73732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:10:45.027389   73732 start.go:563] Will wait 60s for crictl version
	I1105 19:10:45.027451   73732 ssh_runner.go:195] Run: which crictl
	I1105 19:10:45.030701   73732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:10:45.067294   73732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:10:45.067410   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.094394   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.123459   73732 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:10:45.124645   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:45.127396   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.127794   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:45.127833   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.128104   73732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 19:10:45.131923   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:45.143951   73732 kubeadm.go:883] updating cluster {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:10:45.144078   73732 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:10:45.144125   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:45.177770   73732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:10:45.177830   73732 ssh_runner.go:195] Run: which lz4
	I1105 19:10:45.181571   73732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:10:45.186569   73732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:10:45.186602   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:10:46.442865   73732 crio.go:462] duration metric: took 1.26132812s to copy over tarball
	I1105 19:10:46.442959   73732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:10:44.962206   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting to get IP...
	I1105 19:10:44.963032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963397   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963492   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:44.963380   75165 retry.go:31] will retry after 274.297859ms: waiting for machine to come up
	I1105 19:10:45.239024   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239453   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.239406   75165 retry.go:31] will retry after 239.892312ms: waiting for machine to come up
	I1105 19:10:45.481036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481584   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.481569   75165 retry.go:31] will retry after 360.538082ms: waiting for machine to come up
	I1105 19:10:45.844144   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844565   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844596   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.844533   75165 retry.go:31] will retry after 387.597088ms: waiting for machine to come up
	I1105 19:10:46.234241   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234798   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.234738   75165 retry.go:31] will retry after 597.596298ms: waiting for machine to come up
	I1105 19:10:46.833721   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834170   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.834142   75165 retry.go:31] will retry after 688.240413ms: waiting for machine to come up
	I1105 19:10:47.523898   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524412   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524442   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:47.524377   75165 retry.go:31] will retry after 826.38207ms: waiting for machine to come up
	I1105 19:10:48.352258   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352787   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352809   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:48.352681   75165 retry.go:31] will retry after 1.381579847s: waiting for machine to come up
	I1105 19:10:48.547186   73732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104175993s)
	I1105 19:10:48.547221   73732 crio.go:469] duration metric: took 2.104326973s to extract the tarball
	I1105 19:10:48.547231   73732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:10:48.583027   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:48.630180   73732 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:10:48.630208   73732 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:10:48.630218   73732 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.31.2 crio true true} ...
	I1105 19:10:48.630349   73732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-271881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:10:48.630412   73732 ssh_runner.go:195] Run: crio config
	I1105 19:10:48.682182   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:48.682204   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:48.682213   73732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:10:48.682232   73732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-271881 NodeName:embed-certs-271881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:10:48.682354   73732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-271881"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:10:48.682412   73732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:10:48.691968   73732 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:10:48.692031   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:10:48.700980   73732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:10:48.716797   73732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:10:48.732408   73732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1105 19:10:48.748354   73732 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1105 19:10:48.751791   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:48.763068   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:48.893747   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:10:48.910247   73732 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881 for IP: 192.168.39.58
	I1105 19:10:48.910270   73732 certs.go:194] generating shared ca certs ...
	I1105 19:10:48.910303   73732 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:10:48.910488   73732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:10:48.910547   73732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:10:48.910561   73732 certs.go:256] generating profile certs ...
	I1105 19:10:48.910673   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/client.key
	I1105 19:10:48.910768   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key.0a454894
	I1105 19:10:48.910837   73732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key
	I1105 19:10:48.911021   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:10:48.911059   73732 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:10:48.911071   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:10:48.911116   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:10:48.911160   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:10:48.911196   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:10:48.911265   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:48.912104   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:10:48.969066   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:10:49.000713   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:10:49.040367   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:10:49.068456   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1105 19:10:49.094166   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:10:49.115986   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:10:49.137770   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:10:49.161140   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:10:49.182996   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:10:49.206578   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:10:49.230006   73732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:10:49.245835   73732 ssh_runner.go:195] Run: openssl version
	I1105 19:10:49.251252   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:10:49.261237   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265318   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265398   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.270753   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:10:49.280568   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:10:49.290580   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294567   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294644   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.299812   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:10:49.309398   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:10:49.319451   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323490   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323543   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.328708   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:10:49.338805   73732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:10:49.342918   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:10:49.348526   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:10:49.353943   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:10:49.359527   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:10:49.364886   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:10:49.370119   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 19:10:49.375437   73732 kubeadm.go:392] StartCluster: {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:10:49.375531   73732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:10:49.375572   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.415844   73732 cri.go:89] found id: ""
	I1105 19:10:49.415916   73732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:10:49.425336   73732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:10:49.425402   73732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:10:49.425474   73732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:10:49.434717   73732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:10:49.435831   73732 kubeconfig.go:125] found "embed-certs-271881" server: "https://192.168.39.58:8443"
	I1105 19:10:49.437903   73732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:10:49.446625   73732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I1105 19:10:49.446657   73732 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:10:49.446668   73732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:10:49.446732   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.479546   73732 cri.go:89] found id: ""
	I1105 19:10:49.479639   73732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:10:49.499034   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:10:49.510134   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:10:49.510159   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:10:49.510203   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:10:49.520482   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:10:49.520544   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:10:49.530750   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:10:49.539113   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:10:49.539183   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:10:49.548104   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.556754   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:10:49.556811   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.565606   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:10:49.574023   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:10:49.574091   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:10:49.582888   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:10:49.591876   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:49.688517   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.070191   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.38163928s)
	I1105 19:10:51.070240   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.267774   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.329051   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.406120   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:10:51.406226   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:51.907080   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:52.406468   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:49.735558   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735923   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735987   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:49.735914   75165 retry.go:31] will retry after 1.132319443s: waiting for machine to come up
	I1105 19:10:50.870267   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870770   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870801   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:50.870715   75165 retry.go:31] will retry after 1.791598796s: waiting for machine to come up
	I1105 19:10:52.664538   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665055   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:52.664912   75165 retry.go:31] will retry after 1.910294965s: waiting for machine to come up
	I1105 19:10:52.907103   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.407319   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.421763   73732 api_server.go:72] duration metric: took 2.015640262s to wait for apiserver process to appear ...
	I1105 19:10:53.421794   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:10:53.421816   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.752768   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.752803   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.752819   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.772365   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.772412   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.922705   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.928293   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:55.928329   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.422875   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.430633   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.430667   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.922156   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.934958   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.935016   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:57.422646   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:57.428784   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:10:57.435298   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:10:57.435319   73732 api_server.go:131] duration metric: took 4.013519207s to wait for apiserver health ...
	I1105 19:10:57.435327   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:57.435333   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:57.437061   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:10:57.438374   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:10:57.448509   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:10:57.465994   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:10:57.474649   73732 system_pods.go:59] 8 kube-system pods found
	I1105 19:10:57.474682   73732 system_pods.go:61] "coredns-7c65d6cfc9-nwzpq" [be8aa054-3f68-4c19-bae3-9d9cfcb51869] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:10:57.474691   73732 system_pods.go:61] "etcd-embed-certs-271881" [c37c829b-1dca-4659-b24c-4559304d9fe0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:10:57.474703   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [6df78e2a-1360-4c4b-b451-c96aa60f24ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:10:57.474710   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [95a6baca-c246-4043-acbc-235b076a89b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:10:57.474723   73732 system_pods.go:61] "kube-proxy-f945s" [2cb835f0-3727-4dd1-bd21-a21554ffdc0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 19:10:57.474730   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [53e044c5-199c-46f4-b3db-d3b65a8203aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:10:57.474741   73732 system_pods.go:61] "metrics-server-6867b74b74-vw2sm" [403d0c5f-d870-4f89-8caa-f5e9c8bf9ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:10:57.474748   73732 system_pods.go:61] "storage-provisioner" [13a89bf9-fb97-413a-9948-1c69780784cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 19:10:57.474758   73732 system_pods.go:74] duration metric: took 8.737357ms to wait for pod list to return data ...
	I1105 19:10:57.474769   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:10:57.480599   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:10:57.480623   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:10:57.480634   73732 node_conditions.go:105] duration metric: took 5.857622ms to run NodePressure ...
	I1105 19:10:57.480651   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:54.577390   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577939   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577969   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:54.577885   75165 retry.go:31] will retry after 3.393120773s: waiting for machine to come up
	I1105 19:10:57.971960   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972441   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:57.972370   75165 retry.go:31] will retry after 4.425954537s: waiting for machine to come up
	I1105 19:10:57.896717   73732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902115   73732 kubeadm.go:739] kubelet initialised
	I1105 19:10:57.902138   73732 kubeadm.go:740] duration metric: took 5.39576ms waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902152   73732 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:10:57.907293   73732 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:10:59.913946   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:02.414802   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:03.663928   74485 start.go:364] duration metric: took 3m10.909065205s to acquireMachinesLock for "old-k8s-version-567666"
	I1105 19:11:03.664023   74485 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:03.664038   74485 fix.go:54] fixHost starting: 
	I1105 19:11:03.664514   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:03.664569   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:03.682846   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I1105 19:11:03.683341   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:03.683786   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:11:03.683812   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:03.684219   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:03.684407   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:03.684552   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetState
	I1105 19:11:03.686262   74485 fix.go:112] recreateIfNeeded on old-k8s-version-567666: state=Stopped err=<nil>
	I1105 19:11:03.686295   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	W1105 19:11:03.686440   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:03.688047   74485 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-567666" ...
	I1105 19:11:02.401454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.401980   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Found IP for machine: 192.168.50.10
	I1105 19:11:02.402015   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has current primary IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.402025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserving static IP address...
	I1105 19:11:02.402384   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.402413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserved static IP address: 192.168.50.10
	I1105 19:11:02.402432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | skip adding static IP to network mk-default-k8s-diff-port-608095 - found existing host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"}
	I1105 19:11:02.402445   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for SSH to be available...
	I1105 19:11:02.402461   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Getting to WaitForSSH function...
	I1105 19:11:02.404454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404751   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.404778   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404915   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH client type: external
	I1105 19:11:02.404964   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa (-rw-------)
	I1105 19:11:02.405032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:02.405059   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | About to run SSH command:
	I1105 19:11:02.405072   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | exit 0
	I1105 19:11:02.526769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:02.527147   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetConfigRaw
	I1105 19:11:02.527756   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.530014   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530325   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.530357   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530527   74141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/config.json ...
	I1105 19:11:02.530708   74141 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:02.530728   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:02.530921   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.532868   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533184   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.533215   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533334   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.533493   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533630   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533761   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.533930   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.534116   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.534128   74141 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:02.631085   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:02.631114   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631351   74141 buildroot.go:166] provisioning hostname "default-k8s-diff-port-608095"
	I1105 19:11:02.631376   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631540   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.634037   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634371   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.634400   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634517   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.634691   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634849   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634995   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.635136   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.635310   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.635326   74141 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-608095 && echo "default-k8s-diff-port-608095" | sudo tee /etc/hostname
	I1105 19:11:02.744298   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-608095
	
	I1105 19:11:02.744327   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.747036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747348   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.747379   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747555   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.747716   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747846   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747940   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.748061   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.748266   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.748284   74141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-608095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-608095/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-608095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:02.850828   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:02.850854   74141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:02.850906   74141 buildroot.go:174] setting up certificates
	I1105 19:11:02.850923   74141 provision.go:84] configureAuth start
	I1105 19:11:02.850935   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.851260   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.853803   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854062   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.854088   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854203   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.856341   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856629   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.856659   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856747   74141 provision.go:143] copyHostCerts
	I1105 19:11:02.856804   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:02.856823   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:02.856874   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:02.856987   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:02.856997   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:02.857017   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:02.857075   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:02.857082   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:02.857100   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:02.857148   74141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-608095 san=[127.0.0.1 192.168.50.10 default-k8s-diff-port-608095 localhost minikube]
	I1105 19:11:03.048307   74141 provision.go:177] copyRemoteCerts
	I1105 19:11:03.048362   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:03.048386   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.050951   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051303   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.051353   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051556   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.051785   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.051953   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.052084   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.128441   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:03.150680   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1105 19:11:03.172480   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:03.194311   74141 provision.go:87] duration metric: took 343.374586ms to configureAuth
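The provision step above regenerates the machine server cert with SANs for the VM IP and hostnames. A minimal Go sketch (not minikube's provision.go) of how one could confirm those SANs on the generated server.pem; the path and expected names are taken from the san=[...] list logged above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the provision step above; adjust for your own run.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in server.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Expect the SANs logged above: 127.0.0.1, 192.168.50.10,
	// default-k8s-diff-port-608095, localhost, minikube.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs :", cert.IPAddresses)
}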
	I1105 19:11:03.194338   74141 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:03.194499   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:03.194560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.197209   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197585   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.197603   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197822   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.198006   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198168   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198336   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.198503   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.198686   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.198706   74141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:03.429895   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:03.429926   74141 machine.go:96] duration metric: took 899.201597ms to provisionDockerMachine
	I1105 19:11:03.429941   74141 start.go:293] postStartSetup for "default-k8s-diff-port-608095" (driver="kvm2")
	I1105 19:11:03.429955   74141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:03.429976   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.430329   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:03.430364   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.433455   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.433791   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.433820   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.434009   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.434323   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.434500   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.434659   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.514652   74141 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:03.518678   74141 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:03.518711   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:03.518774   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:03.518877   74141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:03.519014   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:03.528972   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:03.555892   74141 start.go:296] duration metric: took 125.936355ms for postStartSetup
	I1105 19:11:03.555939   74141 fix.go:56] duration metric: took 19.896481237s for fixHost
	I1105 19:11:03.555966   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.558764   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559153   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.559183   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559402   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.559610   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559788   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559933   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.560116   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.560292   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.560303   74141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:03.663723   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833863.637227261
	
	I1105 19:11:03.663751   74141 fix.go:216] guest clock: 1730833863.637227261
	I1105 19:11:03.663766   74141 fix.go:229] Guest: 2024-11-05 19:11:03.637227261 +0000 UTC Remote: 2024-11-05 19:11:03.555945261 +0000 UTC m=+239.048686257 (delta=81.282ms)
	I1105 19:11:03.663815   74141 fix.go:200] guest clock delta is within tolerance: 81.282ms
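The clock check above runs "date +%s.%N" on the guest and compares the result with the host clock. A minimal Go sketch of that comparison, assuming the 9-digit fractional output seen in this run (illustrative only, not minikube's fix.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output (e.g. 1730833863.637227261)
// into a time.Time. It assumes the 9-digit nanosecond field seen in this run.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730833863.637227261") // value from the log above
	if err != nil {
		panic(err)
	}
	// minikube logs this delta and accepts the host/guest skew if it is
	// within a small tolerance.
	fmt.Printf("guest clock delta: %v\n", time.Since(guest))
}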
	I1105 19:11:03.663822   74141 start.go:83] releasing machines lock for "default-k8s-diff-port-608095", held for 20.004399519s
	I1105 19:11:03.663858   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.664158   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:03.666922   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667372   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.667408   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668101   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668297   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668412   74141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:03.668478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.668748   74141 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:03.668774   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.671463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671781   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.671810   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671903   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672175   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672333   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.672369   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.672417   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672578   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.672598   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672779   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.673106   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.777585   74141 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:03.783343   74141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:03.927951   74141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:03.933308   74141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:03.933380   74141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:03.948472   74141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:03.948499   74141 start.go:495] detecting cgroup driver to use...
	I1105 19:11:03.948572   74141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:03.963929   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:03.978578   74141 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:03.978643   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:03.992096   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:04.006036   74141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:04.114061   74141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:04.274136   74141 docker.go:233] disabling docker service ...
	I1105 19:11:04.274220   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:04.287806   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:04.300294   74141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:04.429899   74141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:04.576075   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:04.590934   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:04.611299   74141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:04.611375   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.623876   74141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:04.623949   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.634333   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.644768   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.654549   74141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:04.665001   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.675464   74141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.693845   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
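The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place. A hedged Go sketch of the same pause_image and cgroup_manager substitutions applied to an in-memory copy of the file (the sample input lines are assumptions, not the VM's real contents):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed sample of /etc/crio/crio.conf.d/02-crio.conf; the real file on
	// the VM will contain more settings.
	conf := []byte("pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n")

	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))

	fmt.Print(string(conf))
}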
	I1105 19:11:04.703982   74141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:04.713758   74141 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:04.713820   74141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:04.727618   74141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:04.737679   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:04.866928   74141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:04.966529   74141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:04.966599   74141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:04.971536   74141 start.go:563] Will wait 60s for crictl version
	I1105 19:11:04.971602   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:11:04.975344   74141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:05.015910   74141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:05.015987   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.043577   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.072767   74141 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:03.689374   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .Start
	I1105 19:11:03.689560   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring networks are active...
	I1105 19:11:03.690290   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network default is active
	I1105 19:11:03.690659   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network mk-old-k8s-version-567666 is active
	I1105 19:11:03.691130   74485 main.go:141] libmachine: (old-k8s-version-567666) Getting domain xml...
	I1105 19:11:03.691890   74485 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:11:05.006949   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting to get IP...
	I1105 19:11:05.008062   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.008547   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.008605   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.008523   75309 retry.go:31] will retry after 290.124771ms: waiting for machine to come up
	I1105 19:11:05.300185   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.300768   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.300803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.300717   75309 retry.go:31] will retry after 292.829683ms: waiting for machine to come up
	I1105 19:11:05.595365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.595881   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.595907   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.595831   75309 retry.go:31] will retry after 447.168257ms: waiting for machine to come up
	I1105 19:11:06.045320   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.045946   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.045976   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.045893   75309 retry.go:31] will retry after 420.272812ms: waiting for machine to come up
	I1105 19:11:06.467556   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.468012   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.468039   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.467962   75309 retry.go:31] will retry after 657.733497ms: waiting for machine to come up
	I1105 19:11:07.128022   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:07.128531   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:07.128559   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:07.128484   75309 retry.go:31] will retry after 922.664226ms: waiting for machine to come up
	I1105 19:11:04.416533   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:06.915445   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:07.417579   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:07.417610   73732 pod_ready.go:82] duration metric: took 9.510292246s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:07.417620   73732 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:05.073913   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:05.077086   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077430   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:05.077468   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077691   74141 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:05.081724   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
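The /etc/hosts rewrite above drops any stale host.minikube.internal entry before appending the gateway mapping. A small, hypothetical Go equivalent of that idempotent update (not minikube's code; the IP and hostname are the ones from the log):

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any existing line ending in "<tab><name>" and appends a
// fresh "ip<tab>name" entry, mirroring the grep -v / echo pipeline above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(upsertHost(hosts, "192.168.50.1", "host.minikube.internal"))
}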
	I1105 19:11:05.093668   74141 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:05.093785   74141 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:05.093853   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:05.128693   74141 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:05.128753   74141 ssh_runner.go:195] Run: which lz4
	I1105 19:11:05.133116   74141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:05.137101   74141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:05.137126   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:11:06.379012   74141 crio.go:462] duration metric: took 1.245926141s to copy over tarball
	I1105 19:11:06.379088   74141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:08.545369   74141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.166238549s)
	I1105 19:11:08.545405   74141 crio.go:469] duration metric: took 2.166364449s to extract the tarball
	I1105 19:11:08.545422   74141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:08.581651   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:08.628768   74141 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:11:08.628795   74141 cache_images.go:84] Images are preloaded, skipping loading
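The preload decision above hinges on the output of "sudo crictl images --output json". A sketch of that check in Go; the JSON field names mirror the CRI Image message and should be treated as an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages models the relevant part of "crictl images --output json".
// Field names follow the CRI Image message; treat them as an assumption.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(raw []byte, want string) (bool, error) {
	var list crictlImages
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Fabricated sample output for illustration only.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.2")
	fmt.Println(ok, err)
}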
	I1105 19:11:08.628805   74141 kubeadm.go:934] updating node { 192.168.50.10 8444 v1.31.2 crio true true} ...
	I1105 19:11:08.628937   74141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-608095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:08.629056   74141 ssh_runner.go:195] Run: crio config
	I1105 19:11:08.690112   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:08.690140   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:08.690152   74141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:08.690184   74141 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-608095 NodeName:default-k8s-diff-port-608095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:08.690346   74141 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-608095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:08.690415   74141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:08.700222   74141 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:08.700294   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:08.709542   74141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1105 19:11:08.725723   74141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:08.741985   74141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1105 19:11:08.758655   74141 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:08.762296   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:08.774119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:08.910000   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:08.926765   74141 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095 for IP: 192.168.50.10
	I1105 19:11:08.926788   74141 certs.go:194] generating shared ca certs ...
	I1105 19:11:08.926806   74141 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:08.927006   74141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:08.927069   74141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:08.927080   74141 certs.go:256] generating profile certs ...
	I1105 19:11:08.927157   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/client.key
	I1105 19:11:08.927229   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key.f2b96156
	I1105 19:11:08.927281   74141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key
	I1105 19:11:08.927456   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:08.927506   74141 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:08.927516   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:08.927549   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:08.927585   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:08.927620   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:08.927682   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:08.928417   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:08.971359   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:09.011632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:09.049748   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:09.078632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 19:11:09.105786   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:09.127855   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:09.151461   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:11:09.174068   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:09.196733   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:09.219111   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:09.241335   74141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:09.257040   74141 ssh_runner.go:195] Run: openssl version
	I1105 19:11:09.262371   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:09.272232   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276300   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276362   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.281747   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:09.291864   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:09.302012   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306085   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306142   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.311374   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:09.321334   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:09.331208   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335401   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335451   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.340595   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
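The three blocks above install each CA into /usr/share/ca-certificates and link it under /etc/ssl/certs as <subject-hash>.0, using "openssl x509 -hash" to compute the hash. A hedged Go sketch of that linking step (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of certPath and links it into
// certsDir as "<hash>.0", the lookup name OpenSSL-based tools expect.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}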
	I1105 19:11:09.350430   74141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:09.354622   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:09.360165   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:09.365624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:09.371545   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:09.377226   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:09.382624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
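Each "openssl x509 -noout ... -checkend 86400" call above asserts that the certificate is still valid 24 hours from now. A minimal Go equivalent using crypto/x509 (the path is one of the certs checked above; the helper name is ours):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path will have expired
// d from now, i.e. the condition "openssl x509 -checkend" guards against.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}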
	I1105 19:11:09.387929   74141 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:09.388032   74141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:09.388076   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.429707   74141 cri.go:89] found id: ""
	I1105 19:11:09.429783   74141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:09.440455   74141 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:09.440476   74141 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:09.440527   74141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:09.451745   74141 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:09.452609   74141 kubeconfig.go:125] found "default-k8s-diff-port-608095" server: "https://192.168.50.10:8444"
	I1105 19:11:09.454539   74141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:09.463900   74141 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.10
	I1105 19:11:09.463926   74141 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:09.463936   74141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:09.463987   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.497583   74141 cri.go:89] found id: ""
	I1105 19:11:09.497656   74141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:09.513767   74141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:09.523219   74141 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:09.523237   74141 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:09.523284   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1105 19:11:09.533116   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:09.533181   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:09.542453   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1105 19:11:08.053120   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:08.053610   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:08.053636   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:08.053587   75309 retry.go:31] will retry after 947.415519ms: waiting for machine to come up
	I1105 19:11:09.002803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:09.003423   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:09.003452   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:09.003363   75309 retry.go:31] will retry after 1.07978111s: waiting for machine to come up
	I1105 19:11:10.084404   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:10.084808   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:10.084830   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:10.084784   75309 retry.go:31] will retry after 1.482510322s: waiting for machine to come up
	I1105 19:11:11.568421   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:11.568840   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:11.568869   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:11.568791   75309 retry.go:31] will retry after 1.630983434s: waiting for machine to come up
	I1105 19:11:08.426308   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.426337   73732 pod_ready.go:82] duration metric: took 1.008708779s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.426350   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432238   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.432264   73732 pod_ready.go:82] duration metric: took 5.905051ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432276   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438187   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.438214   73732 pod_ready.go:82] duration metric: took 5.9294ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438226   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443794   73732 pod_ready.go:93] pod "kube-proxy-f945s" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.443823   73732 pod_ready.go:82] duration metric: took 5.587862ms for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443835   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:10.449498   73732 pod_ready.go:103] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:12.454934   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:12.454965   73732 pod_ready.go:82] duration metric: took 4.011121022s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:12.455003   73732 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:09.551174   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:09.551235   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:09.560481   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.571928   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:09.571997   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.583935   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1105 19:11:09.595336   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:09.595401   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:09.605061   74141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:09.613920   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:09.718759   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.680100   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.901034   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.951868   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.997866   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:10.997956   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.498113   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.998192   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.498517   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.998919   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:13.013078   74141 api_server.go:72] duration metric: took 2.01520799s to wait for apiserver process to appear ...
	I1105 19:11:13.013106   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:11:13.013136   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.042333   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.042388   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.042404   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.085574   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.085602   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.513733   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.518755   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:16.518789   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.013278   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.019214   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:17.019236   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.513886   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.519036   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:11:17.528970   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:11:17.529000   74141 api_server.go:131] duration metric: took 4.515887773s to wait for apiserver health ...
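The anonymous 403s and the verbose 500 responses above come from kube-apiserver's /healthz endpoint while its post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still settling. A minimal sketch of probing the same endpoint by hand, assuming the standard minikube certificate layout on the node (the certificate paths are assumptions, not taken from this log):
	# Hedged sketch: query the apiserver health endpoint the waiter above is polling.
	# Presenting a client certificate avoids the "system:anonymous" 403 seen earlier;
	# ?verbose prints the per-hook [+]/[-] breakdown even when the check passes.
	curl -sk \
	  --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  --key  /var/lib/minikube/certs/apiserver-kubelet-client.key \
	  "https://192.168.50.10:8444/healthz?verbose"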
	I1105 19:11:17.529009   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:17.529016   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:17.530429   74141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:11:13.201891   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:13.202425   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:13.202453   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:13.202387   75309 retry.go:31] will retry after 2.689744765s: waiting for machine to come up
	I1105 19:11:15.893632   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:15.893989   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:15.894034   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:15.893964   75309 retry.go:31] will retry after 2.460566804s: waiting for machine to come up
	I1105 19:11:14.465748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:16.961287   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:17.531600   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:11:17.544876   74141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
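The 496-byte 1-k8s.conflist pushed above is not echoed in the log; a representative bridge-plus-portmap conflist of the kind the bridge CNI option writes (the field values here are assumptions, not the exact file from this run) looks like:
	# Representative bridge CNI config; bridge name and pod subnet are assumptions.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF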
	I1105 19:11:17.567835   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:11:17.583925   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:11:17.583976   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:11:17.583988   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:11:17.583999   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:11:17.584015   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:11:17.584027   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:11:17.584041   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:11:17.584052   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:11:17.584060   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:11:17.584068   74141 system_pods.go:74] duration metric: took 16.206948ms to wait for pod list to return data ...
	I1105 19:11:17.584081   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:11:17.593935   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:11:17.593960   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:11:17.593971   74141 node_conditions.go:105] duration metric: took 9.883295ms to run NodePressure ...
	I1105 19:11:17.593988   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:17.929181   74141 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933853   74141 kubeadm.go:739] kubelet initialised
	I1105 19:11:17.933879   74141 kubeadm.go:740] duration metric: took 4.667992ms waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933888   74141 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:17.940560   74141 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.952799   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952832   74141 pod_ready.go:82] duration metric: took 12.240861ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.952845   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952856   74141 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.959079   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959105   74141 pod_ready.go:82] duration metric: took 6.23649ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.959119   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959130   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.963797   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963817   74141 pod_ready.go:82] duration metric: took 4.681011ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.963830   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963837   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.970915   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970935   74141 pod_ready.go:82] duration metric: took 7.091116ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.970945   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970951   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.371478   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371503   74141 pod_ready.go:82] duration metric: took 400.5454ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.371512   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371519   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.771731   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771768   74141 pod_ready.go:82] duration metric: took 400.239012ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.771783   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771792   74141 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:19.171239   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171271   74141 pod_ready.go:82] duration metric: took 399.46983ms for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:19.171286   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171296   74141 pod_ready.go:39] duration metric: took 1.237397637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:19.171315   74141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:11:19.185845   74141 ops.go:34] apiserver oom_adj: -16
	I1105 19:11:19.185869   74141 kubeadm.go:597] duration metric: took 9.745385943s to restartPrimaryControlPlane
	I1105 19:11:19.185880   74141 kubeadm.go:394] duration metric: took 9.797958845s to StartCluster
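With the control plane restarted, the same kube-system pods the waiter polls above can be inspected directly; a short sketch using the context name from this run:
	# Check node readiness and kube-system pod status for this profile's context.
	kubectl --context default-k8s-diff-port-608095 get nodes
	kubectl --context default-k8s-diff-port-608095 -n kube-system get pods -o wide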
	I1105 19:11:19.185901   74141 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.185989   74141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:19.187722   74141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.187971   74141 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:11:19.188036   74141 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:11:19.188142   74141 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188160   74141 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-608095"
	I1105 19:11:19.188159   74141 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-608095"
	W1105 19:11:19.188171   74141 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:11:19.188199   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188236   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:19.188248   74141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-608095"
	I1105 19:11:19.188273   74141 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188310   74141 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.188323   74141 addons.go:243] addon metrics-server should already be in state true
	I1105 19:11:19.188379   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188526   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188569   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188674   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188725   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188802   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188823   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.189792   74141 out.go:177] * Verifying Kubernetes components...
	I1105 19:11:19.191119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:19.203875   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I1105 19:11:19.204313   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.204803   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.204830   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.205083   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I1105 19:11:19.205175   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.205432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.205488   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.205973   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.205999   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.206357   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.206916   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.206955   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.207292   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I1105 19:11:19.207671   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.208122   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.208146   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.208484   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.208861   74141 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.208882   74141 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:11:19.208909   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.209004   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209045   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.209234   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209273   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.223963   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I1105 19:11:19.224405   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.225044   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.225074   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.225460   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.226141   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I1105 19:11:19.226463   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.226509   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.226577   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.226757   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I1105 19:11:19.227058   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.227081   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.227475   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.227558   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.227797   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.228116   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.228136   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.228530   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.228755   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.229870   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.230471   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.232239   74141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:19.232263   74141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:11:19.233508   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:11:19.233527   74141 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:11:19.233548   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.233607   74141 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.233626   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:11:19.233647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.237337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237365   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237895   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237928   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237958   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237972   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.238155   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238270   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238440   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238623   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238681   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.239040   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.243685   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1105 19:11:19.244073   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.244584   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.244602   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.244951   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.245112   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.246617   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.246814   74141 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.246830   74141 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:11:19.246845   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.249467   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.249896   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.249925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.250139   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.250317   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.250466   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.250636   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.396917   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:19.412224   74141 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:19.541493   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.566934   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:11:19.566982   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:11:19.567627   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.607685   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:11:19.607717   74141 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:11:19.640921   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:19.640959   74141 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:11:19.674550   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:20.091222   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091248   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091528   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091583   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091596   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091605   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091807   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091868   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091853   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.105073   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.105093   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.105426   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.105442   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719139   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.151476995s)
	I1105 19:11:20.719187   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719194   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.044605505s)
	I1105 19:11:20.719236   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719256   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719511   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719582   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719593   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719596   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719631   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719580   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719643   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719654   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719670   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719680   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719897   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719946   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719948   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719903   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719982   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719990   74141 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-608095"
	I1105 19:11:20.719927   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.721843   74141 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
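The addon state configured above can also be viewed from the minikube CLI for this profile, and metrics-server can be sanity-checked once it is serving; a sketch, not part of the test run itself:
	# List addon state for the profile, then confirm metrics are being served.
	minikube -p default-k8s-diff-port-608095 addons list
	kubectl --context default-k8s-diff-port-608095 top nodes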
	I1105 19:11:22.583507   73496 start.go:364] duration metric: took 54.335724939s to acquireMachinesLock for "no-preload-459223"
	I1105 19:11:22.583581   73496 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:22.583590   73496 fix.go:54] fixHost starting: 
	I1105 19:11:22.584018   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:22.584054   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:22.603921   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1105 19:11:22.604367   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:22.604825   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:11:22.604845   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:22.605233   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:22.605408   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:22.605534   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:11:22.607289   73496 fix.go:112] recreateIfNeeded on no-preload-459223: state=Stopped err=<nil>
	I1105 19:11:22.607314   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	W1105 19:11:22.607458   73496 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:22.609455   73496 out.go:177] * Restarting existing kvm2 VM for "no-preload-459223" ...
	I1105 19:11:18.357643   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:18.358065   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:18.358099   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:18.358009   75309 retry.go:31] will retry after 3.036834524s: waiting for machine to come up
	I1105 19:11:21.398221   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398763   74485 main.go:141] libmachine: (old-k8s-version-567666) Found IP for machine: 192.168.61.125
	I1105 19:11:21.398825   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has current primary IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398843   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserving static IP address...
	I1105 19:11:21.399327   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.399350   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserved static IP address: 192.168.61.125
	I1105 19:11:21.399365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | skip adding static IP to network mk-old-k8s-version-567666 - found existing host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"}
	I1105 19:11:21.399379   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:11:21.399394   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting for SSH to be available...
	I1105 19:11:21.401270   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401664   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.401691   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401866   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:11:21.401897   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:11:21.401935   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:21.401949   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:11:21.401959   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:11:21.527815   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:21.528165   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:11:21.528874   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.531373   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531647   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.531672   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531876   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:11:21.532071   74485 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:21.532092   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:21.532332   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.534177   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534431   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.534465   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534556   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.534716   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534845   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534960   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.535142   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.535329   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.535341   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:21.643321   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:21.643354   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643618   74485 buildroot.go:166] provisioning hostname "old-k8s-version-567666"
	I1105 19:11:21.643646   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643812   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.646230   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646628   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.646666   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.647037   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647167   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647290   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.647421   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.647579   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.647592   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-567666 && echo "old-k8s-version-567666" | sudo tee /etc/hostname
	I1105 19:11:21.770209   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-567666
	
	I1105 19:11:21.770255   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.772932   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773314   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.773346   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773484   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.773691   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773950   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.774121   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.774357   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.774386   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-567666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-567666/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-567666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:21.890834   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:21.890860   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:21.890915   74485 buildroot.go:174] setting up certificates
	I1105 19:11:21.890929   74485 provision.go:84] configureAuth start
	I1105 19:11:21.890944   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.891224   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.893835   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894256   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.894285   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.896436   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896699   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.896715   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896893   74485 provision.go:143] copyHostCerts
	I1105 19:11:21.896951   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:21.896967   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:21.897037   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:21.897163   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:21.897176   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:21.897205   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:21.897279   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:21.897289   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:21.897315   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:21.897396   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-567666 san=[127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666]
	I1105 19:11:21.962153   74485 provision.go:177] copyRemoteCerts
	I1105 19:11:21.962219   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:21.962257   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.964765   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965125   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.965166   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965330   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.965478   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.965603   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.965746   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.048519   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:22.072975   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 19:11:22.098263   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:22.120258   74485 provision.go:87] duration metric: took 229.316972ms to configureAuth
	I1105 19:11:22.120285   74485 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:22.120444   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:11:22.120516   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.123859   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124309   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.124344   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124536   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.124737   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.124922   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.125055   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.125213   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.125375   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.125388   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:22.349922   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:22.349964   74485 machine.go:96] duration metric: took 817.87332ms to provisionDockerMachine
	I1105 19:11:22.349979   74485 start.go:293] postStartSetup for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:11:22.349992   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:22.350014   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.350350   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:22.350385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.352922   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353310   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.353332   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353459   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.353638   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.353807   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.353921   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.437482   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:22.441617   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:22.441646   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:22.441711   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:22.441807   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:22.441929   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:22.451016   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:22.474199   74485 start.go:296] duration metric: took 124.207336ms for postStartSetup
	I1105 19:11:22.474233   74485 fix.go:56] duration metric: took 18.810197154s for fixHost
	I1105 19:11:22.474269   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.476786   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477119   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.477157   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477279   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.477471   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477621   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477753   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.477910   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.478070   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.478081   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:22.583343   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833882.558222038
	
	I1105 19:11:22.583363   74485 fix.go:216] guest clock: 1730833882.558222038
	I1105 19:11:22.583372   74485 fix.go:229] Guest: 2024-11-05 19:11:22.558222038 +0000 UTC Remote: 2024-11-05 19:11:22.474236871 +0000 UTC m=+209.862783450 (delta=83.985167ms)
	I1105 19:11:22.583418   74485 fix.go:200] guest clock delta is within tolerance: 83.985167ms
	I1105 19:11:22.583429   74485 start.go:83] releasing machines lock for "old-k8s-version-567666", held for 18.919444623s
	I1105 19:11:22.583460   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.583717   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:22.586183   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586479   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.586509   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586687   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587137   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587310   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587400   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:22.587448   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.587521   74485 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:22.587548   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.590145   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590474   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.590507   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590530   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590655   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.590831   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.590995   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.591010   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591037   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.591179   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.591286   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.591438   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.591558   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591702   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:19.461723   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:21.962582   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:22.702707   74485 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:22.708965   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:22.856764   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:22.863791   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:22.863866   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:22.883997   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:22.884022   74485 start.go:495] detecting cgroup driver to use...
	I1105 19:11:22.884094   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:22.901499   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:22.919358   74485 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:22.919422   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:22.936964   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:22.953538   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:23.077720   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:23.218316   74485 docker.go:233] disabling docker service ...
	I1105 19:11:23.218390   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:23.238316   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:23.251814   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:23.427386   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:23.552928   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:23.567149   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:23.587241   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 19:11:23.587307   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.597558   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:23.597620   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.607466   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.616794   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.626425   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:23.637121   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:23.649243   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:23.649305   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:23.664648   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:23.675060   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:23.812636   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:23.903326   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:23.903404   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:23.908377   74485 start.go:563] Will wait 60s for crictl version
	I1105 19:11:23.908434   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:23.912163   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:23.961712   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:23.961794   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:23.992951   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:24.032041   74485 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1105 19:11:20.723316   74141 addons.go:510] duration metric: took 1.53528546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1105 19:11:21.416385   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:23.416458   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:22.610737   73496 main.go:141] libmachine: (no-preload-459223) Calling .Start
	I1105 19:11:22.610910   73496 main.go:141] libmachine: (no-preload-459223) Ensuring networks are active...
	I1105 19:11:22.611680   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network default is active
	I1105 19:11:22.612057   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network mk-no-preload-459223 is active
	I1105 19:11:22.612426   73496 main.go:141] libmachine: (no-preload-459223) Getting domain xml...
	I1105 19:11:22.613081   73496 main.go:141] libmachine: (no-preload-459223) Creating domain...
	I1105 19:11:24.013821   73496 main.go:141] libmachine: (no-preload-459223) Waiting to get IP...
	I1105 19:11:24.014922   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.015467   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.015561   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.015439   75501 retry.go:31] will retry after 233.461829ms: waiting for machine to come up
	I1105 19:11:24.251339   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.252673   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.252799   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.252760   75501 retry.go:31] will retry after 276.401207ms: waiting for machine to come up
	I1105 19:11:24.531408   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.531964   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.531987   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.531909   75501 retry.go:31] will retry after 367.69826ms: waiting for machine to come up
	I1105 19:11:24.901179   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.901579   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.901608   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.901536   75501 retry.go:31] will retry after 602.654501ms: waiting for machine to come up
	I1105 19:11:25.505889   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:25.506403   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:25.506426   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:25.506364   75501 retry.go:31] will retry after 492.077165ms: waiting for machine to come up
	I1105 19:11:24.033400   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:24.036549   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037128   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:24.037165   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037346   74485 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:24.042641   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:24.055174   74485 kubeadm.go:883] updating cluster {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:24.055327   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:11:24.055388   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:24.101655   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:24.101724   74485 ssh_runner.go:195] Run: which lz4
	I1105 19:11:24.105618   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:24.109705   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:24.109735   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 19:11:25.602158   74485 crio.go:462] duration metric: took 1.496564307s to copy over tarball
	I1105 19:11:25.602236   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:23.963218   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:26.461963   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:25.419351   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:26.916693   74141 node_ready.go:49] node "default-k8s-diff-port-608095" has status "Ready":"True"
	I1105 19:11:26.916731   74141 node_ready.go:38] duration metric: took 7.50447744s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:26.916744   74141 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:26.922179   74141 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927845   74141 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.927879   74141 pod_ready.go:82] duration metric: took 5.666725ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927892   74141 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932723   74141 pod_ready.go:93] pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.932752   74141 pod_ready.go:82] duration metric: took 4.843531ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932761   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937108   74141 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.937137   74141 pod_ready.go:82] duration metric: took 4.368536ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937152   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.941970   74141 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.941995   74141 pod_ready.go:82] duration metric: took 4.833418ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.942008   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317480   74141 pod_ready.go:93] pod "kube-proxy-8v42c" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.317505   74141 pod_ready.go:82] duration metric: took 375.489077ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317517   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717923   74141 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.717945   74141 pod_ready.go:82] duration metric: took 400.42059ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717956   74141 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.000041   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.000558   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.000613   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.000525   75501 retry.go:31] will retry after 920.198126ms: waiting for machine to come up
	I1105 19:11:26.922134   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.922917   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.922951   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.922858   75501 retry.go:31] will retry after 1.071853506s: waiting for machine to come up
	I1105 19:11:27.996574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:27.996995   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:27.997020   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:27.996949   75501 retry.go:31] will retry after 1.283200825s: waiting for machine to come up
	I1105 19:11:29.282457   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:29.282942   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:29.282979   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:29.282903   75501 retry.go:31] will retry after 1.512809658s: waiting for machine to come up
	I1105 19:11:28.701223   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.098952901s)
	I1105 19:11:28.701253   74485 crio.go:469] duration metric: took 3.099065633s to extract the tarball
	I1105 19:11:28.701263   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:28.744214   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:28.778845   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:28.778868   74485 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:28.778962   74485 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:28.778945   74485 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.779024   74485 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.779039   74485 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.778939   74485 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.779067   74485 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.779083   74485 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.778957   74485 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781024   74485 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781003   74485 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.781052   74485 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.781002   74485 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.781088   74485 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.781114   74485 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.013637   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 19:11:29.043928   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.043936   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.044140   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.045892   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.046313   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.055792   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.081724   74485 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 19:11:29.081779   74485 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 19:11:29.081826   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.234925   74485 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 19:11:29.234966   74485 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.235046   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235079   74485 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 19:11:29.235112   74485 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.235136   74485 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 19:11:29.235152   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235167   74485 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.235200   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235238   74485 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 19:11:29.235277   74485 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.235298   74485 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 19:11:29.235320   74485 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.235333   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235352   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235351   74485 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 19:11:29.235385   74485 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.235415   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235426   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.251873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.251960   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.251985   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.252000   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.371298   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.415548   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.415592   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.415654   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.415710   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.415791   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.415868   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.466873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.544593   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.544660   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.586695   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.586714   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.586812   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.586916   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.606582   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 19:11:29.707767   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 19:11:29.707803   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 19:11:29.716195   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 19:11:29.723097   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 19:11:30.039971   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:30.182760   74485 cache_images.go:92] duration metric: took 1.403874987s to LoadCachedImages
	W1105 19:11:30.182890   74485 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1105 19:11:30.182912   74485 kubeadm.go:934] updating node { 192.168.61.125 8443 v1.20.0 crio true true} ...
	I1105 19:11:30.183052   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-567666 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:30.183146   74485 ssh_runner.go:195] Run: crio config
	I1105 19:11:30.235206   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:11:30.235241   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:30.235253   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:30.235277   74485 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-567666 NodeName:old-k8s-version-567666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 19:11:30.235433   74485 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-567666"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:30.235503   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 19:11:30.245189   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:30.245263   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:30.254772   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1105 19:11:30.271711   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:30.288568   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1105 19:11:30.309098   74485 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:30.313211   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:30.325637   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:30.447346   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:30.466863   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666 for IP: 192.168.61.125
	I1105 19:11:30.466884   74485 certs.go:194] generating shared ca certs ...
	I1105 19:11:30.466898   74485 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:30.467086   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:30.467152   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:30.467165   74485 certs.go:256] generating profile certs ...
	I1105 19:11:30.467322   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key
	I1105 19:11:30.467398   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8
	I1105 19:11:30.467448   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key
	I1105 19:11:30.467614   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:30.467656   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:30.467676   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:30.467722   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:30.467759   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:30.467788   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:30.467847   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:30.468756   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:30.532325   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:30.559936   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:30.592995   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:30.632421   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 19:11:30.662285   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:11:30.696292   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:30.725642   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:30.750231   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:30.773213   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:30.796269   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:30.820261   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:30.837059   74485 ssh_runner.go:195] Run: openssl version
	I1105 19:11:30.842937   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:30.855033   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859637   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859720   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.865747   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:30.877678   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:30.890762   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895576   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895642   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.901686   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:30.912689   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:30.923800   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928911   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928984   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.934782   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
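The three symlink runs above repeat one pattern for every host CA (minikubeCA.pem, 15492.pem, 154922.pem): the certificate sits under /usr/share/ca-certificates, gets a name-based link in /etc/ssl/certs, and gets the OpenSSL subject-hash symlink (for example b5213941.0) that TLS libraries use for lookup. A minimal shell sketch of that pattern follows; the install_ca helper name is an illustration, not anything from minikube itself.

    # Install one CA into the guest trust store the way the log does it:
    # a name-based symlink plus a <subject-hash>.0 symlink in /etc/ssl/certs.
    install_ca() {
      src="$1"                                  # e.g. /usr/share/ca-certificates/minikubeCA.pem
      name="$(basename "$src")"
      sudo test -s "$src" && sudo ln -fs "$src" "/etc/ssl/certs/$name"
      hash="$(openssl x509 -hash -noout -in "$src")"   # prints e.g. b5213941
      sudo test -L "/etc/ssl/certs/$hash.0" || sudo ln -fs "/etc/ssl/certs/$name" "/etc/ssl/certs/$hash.0"
    }
    install_ca /usr/share/ca-certificates/minikubeCA.pem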
	I1105 19:11:30.947059   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:30.951934   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:30.958065   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:30.965341   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:30.971725   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:30.977606   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:30.983486   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
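The six openssl runs above are expiry probes rather than full chain verifications: -checkend 86400 makes openssl exit non-zero when a certificate expires within the next 24 hours, which is what flags a certificate for regeneration on the restart path. A compact sketch of the same check over the certificates named in the log:

    # Flag any control-plane cert that expires within the next 24 hours.
    for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt \
               etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
               front-proxy-client.crt; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
        || echo "renew needed: $crt"
    done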
	I1105 19:11:30.989212   74485 kubeadm.go:392] StartCluster: {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:30.989350   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:30.989411   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.031794   74485 cri.go:89] found id: ""
	I1105 19:11:31.031884   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:31.043178   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:31.043202   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:31.043291   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:31.054102   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:31.055256   74485 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:31.055924   74485 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-567666" cluster setting kubeconfig missing "old-k8s-version-567666" context setting]
	I1105 19:11:31.056913   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:31.064220   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:31.074582   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.125
	I1105 19:11:31.074618   74485 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:31.074628   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:31.074706   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.111157   74485 cri.go:89] found id: ""
	I1105 19:11:31.111241   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:31.130027   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:31.139917   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:31.139939   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:31.140007   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:31.150790   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:31.150868   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:31.161397   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:31.170394   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:31.170462   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:31.179594   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.188892   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:31.188952   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.199840   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:31.209166   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:31.209244   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
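Each grep/rm pair above applies the same rule: a kubeconfig-style file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and deleted so the kubeadm phases that follow can regenerate it. A sketch of that loop (the log runs grep without the quiet flag; -q here is only for brevity):

    # Remove any /etc/kubernetes conf file that does not point at the
    # expected control-plane endpoint, so kubeadm can rewrite it.
    ep='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done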
	I1105 19:11:31.219687   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:31.231079   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:31.350667   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.094565   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.334807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.457538   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
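The restart path does not run a full kubeadm init; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the versioned binaries directory. A sketch of that sequence; the loop form is an illustration, while the commands themselves match the log:

    # Re-run the kubeadm init phases used for a cluster restart.
    KPATH=/var/lib/minikube/binaries/v1.20.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into two arguments.
      sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config "$CFG"
    done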
	I1105 19:11:32.534503   74485 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:32.534596   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:28.464017   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.962422   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:29.725325   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:32.225372   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.796963   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:30.797438   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:30.797489   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:30.797407   75501 retry.go:31] will retry after 1.774832047s: waiting for machine to come up
	I1105 19:11:32.574423   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:32.575000   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:32.575047   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:32.574929   75501 retry.go:31] will retry after 2.041093372s: waiting for machine to come up
	I1105 19:11:34.618469   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:34.618954   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:34.619015   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:34.618915   75501 retry.go:31] will retry after 2.731949113s: waiting for machine to come up
	I1105 19:11:33.034690   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:33.535594   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.035526   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.534836   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.034947   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.535108   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.035417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.535438   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.034766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.535415   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:32.962469   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.963093   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.461010   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.724484   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.224511   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.352209   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:37.352752   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:37.352783   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:37.352686   75501 retry.go:31] will retry after 3.62202055s: waiting for machine to come up
	I1105 19:11:38.035553   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:38.534702   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.035332   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.534749   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.034989   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.535354   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.035624   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.534847   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.035293   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.535363   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
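The repeated pgrep runs from process 74485 are the apiserver wait loop: minikube polls for a kube-apiserver process roughly every 500ms (visible in the timestamps) until one appears. A sketch of an equivalent loop; the 60-second deadline is an assumption for illustration, not a value taken from the log:

    # Poll for the kube-apiserver process about twice a second.
    deadline=$((SECONDS + 60))   # assumed timeout, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo 'apiserver did not appear' >&2; exit 1; }
      sleep 0.5
    done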
	I1105 19:11:39.465635   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:41.961348   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:40.978791   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979231   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has current primary IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979249   73496 main.go:141] libmachine: (no-preload-459223) Found IP for machine: 192.168.72.101
	I1105 19:11:40.979258   73496 main.go:141] libmachine: (no-preload-459223) Reserving static IP address...
	I1105 19:11:40.979621   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.979650   73496 main.go:141] libmachine: (no-preload-459223) Reserved static IP address: 192.168.72.101
	I1105 19:11:40.979669   73496 main.go:141] libmachine: (no-preload-459223) DBG | skip adding static IP to network mk-no-preload-459223 - found existing host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"}
	I1105 19:11:40.979682   73496 main.go:141] libmachine: (no-preload-459223) Waiting for SSH to be available...
	I1105 19:11:40.979710   73496 main.go:141] libmachine: (no-preload-459223) DBG | Getting to WaitForSSH function...
	I1105 19:11:40.981725   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.982063   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982202   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH client type: external
	I1105 19:11:40.982227   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa (-rw-------)
	I1105 19:11:40.982258   73496 main.go:141] libmachine: (no-preload-459223) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:40.982286   73496 main.go:141] libmachine: (no-preload-459223) DBG | About to run SSH command:
	I1105 19:11:40.982310   73496 main.go:141] libmachine: (no-preload-459223) DBG | exit 0
	I1105 19:11:41.111259   73496 main.go:141] libmachine: (no-preload-459223) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:41.111639   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetConfigRaw
	I1105 19:11:41.112368   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.114811   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115215   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.115244   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115499   73496 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/config.json ...
	I1105 19:11:41.115687   73496 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:41.115705   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:41.115900   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.118059   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118481   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.118505   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118659   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.118833   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.118959   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.119078   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.119222   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.119426   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.119442   73496 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:41.235030   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:41.235060   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235270   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:11:41.235294   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235480   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.237980   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238288   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.238327   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238405   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.238567   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238687   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238805   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.238938   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.239150   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.239163   73496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-459223 && echo "no-preload-459223" | sudo tee /etc/hostname
	I1105 19:11:41.366664   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-459223
	
	I1105 19:11:41.366693   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.369672   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.369979   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.370006   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.370147   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.370335   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370661   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.370830   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.371067   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.371086   73496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-459223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-459223/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-459223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:41.495741   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:41.495774   73496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:41.495796   73496 buildroot.go:174] setting up certificates
	I1105 19:11:41.495804   73496 provision.go:84] configureAuth start
	I1105 19:11:41.495816   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.496076   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.498948   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499377   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.499409   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499552   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.501842   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502168   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.502198   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502367   73496 provision.go:143] copyHostCerts
	I1105 19:11:41.502428   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:41.502445   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:41.502516   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:41.502662   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:41.502674   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:41.502706   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:41.502814   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:41.502825   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:41.502853   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:41.502934   73496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.no-preload-459223 san=[127.0.0.1 192.168.72.101 localhost minikube no-preload-459223]
	I1105 19:11:41.648058   73496 provision.go:177] copyRemoteCerts
	I1105 19:11:41.648115   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:41.648137   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.650915   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651274   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.651306   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.651707   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.651878   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.652032   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:41.736549   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:11:41.759352   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:41.782205   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:41.804725   73496 provision.go:87] duration metric: took 308.906806ms to configureAuth
	I1105 19:11:41.804755   73496 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:41.804930   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:41.805011   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.807634   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.808071   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.808498   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808657   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808792   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.808960   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.809113   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.809125   73496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:42.033406   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:42.033449   73496 machine.go:96] duration metric: took 917.749182ms to provisionDockerMachine
	I1105 19:11:42.033462   73496 start.go:293] postStartSetup for "no-preload-459223" (driver="kvm2")
	I1105 19:11:42.033475   73496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:42.033506   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.033853   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:42.033883   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.037259   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037688   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.037722   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037869   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.038063   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.038231   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.038361   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.126624   73496 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:42.130761   73496 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:42.130794   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:42.130881   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:42.131006   73496 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:42.131120   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:42.140978   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:42.163880   73496 start.go:296] duration metric: took 130.405487ms for postStartSetup
	I1105 19:11:42.163933   73496 fix.go:56] duration metric: took 19.580327925s for fixHost
	I1105 19:11:42.163953   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.166648   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.166994   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.167025   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.167196   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.167394   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167565   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167705   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.167856   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:42.168016   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:42.168025   73496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:42.279303   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833902.251467447
	
	I1105 19:11:42.279336   73496 fix.go:216] guest clock: 1730833902.251467447
	I1105 19:11:42.279351   73496 fix.go:229] Guest: 2024-11-05 19:11:42.251467447 +0000 UTC Remote: 2024-11-05 19:11:42.163937292 +0000 UTC m=+356.505256250 (delta=87.530155ms)
	I1105 19:11:42.279378   73496 fix.go:200] guest clock delta is within tolerance: 87.530155ms
	I1105 19:11:42.279387   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 19.695831159s
	I1105 19:11:42.279417   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.279660   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:42.282462   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.282828   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.282871   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.283018   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283439   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283580   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283669   73496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:42.283716   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.283811   73496 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:42.283838   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.286528   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286754   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286891   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.286917   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287097   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.287112   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287124   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287313   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287495   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287510   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287666   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287664   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.287769   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.398511   73496 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:42.404337   73496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:42.550196   73496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:42.555775   73496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:42.555853   73496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:42.571003   73496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
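The find/mv step above is what disabled the 87-podman-bridge.conflist: every bridge or podman CNI config under /etc/cni/net.d gets an .mk_disabled suffix so CRI-O will not load it. A sketch with the shell quoting restored (the log prints the find arguments unescaped):

    # Rename bridge/podman CNI configs out of the way of CRI-O.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;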
	I1105 19:11:42.571031   73496 start.go:495] detecting cgroup driver to use...
	I1105 19:11:42.571123   73496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:42.586390   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:42.599887   73496 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:42.599944   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:42.613260   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:42.626371   73496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:42.736949   73496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:42.898897   73496 docker.go:233] disabling docker service ...
	I1105 19:11:42.898965   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:42.912534   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:42.925075   73496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:43.043425   73496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:43.175468   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
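Before CRI-O can own the CRI socket, both cri-dockerd and Docker itself are stopped, disabled, and masked, and the follow-up is-active check confirms Docker is really down. A condensed sketch of those systemctl calls (the log issues them unit by unit rather than in a loop):

    # Take cri-dockerd and docker out of the picture so CRI-O is the only CRI.
    for unit in cri-docker docker; do
      sudo systemctl stop -f "$unit.socket" "$unit.service" || true
      sudo systemctl disable "$unit.socket" || true
      sudo systemctl mask "$unit.service"
    done
    sudo systemctl is-active --quiet docker && echo 'docker still active' >&2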
	I1105 19:11:43.190803   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:43.210413   73496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:43.210496   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.221971   73496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:43.222064   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.232251   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.241540   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.251131   73496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:43.261218   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.270932   73496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.287905   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.297730   73496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:43.307263   73496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:43.307319   73496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:43.319421   73496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:43.328415   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:43.445798   73496 ssh_runner.go:195] Run: sudo systemctl restart crio
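The block above is the CRI-O preparation: crictl is pointed at the CRI-O socket, the pause image and the cgroupfs cgroup manager are pinned via sed edits to the 02-crio.conf drop-in, br_netfilter and IPv4 forwarding are enabled (the earlier sysctl probe failed only because the module was not yet loaded), and CRI-O is restarted. A condensed sketch of the main steps:

    # Point crictl at CRI-O, pin pause image and cgroup driver, then restart.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio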
	I1105 19:11:43.532190   73496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:43.532284   73496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:43.536931   73496 start.go:563] Will wait 60s for crictl version
	I1105 19:11:43.536986   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.540525   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:43.576428   73496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:43.576540   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.603034   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.631229   73496 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:39.724162   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:42.224141   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:44.224609   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:43.632482   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:43.634912   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635227   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:43.635260   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635530   73496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:43.639287   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:43.650818   73496 kubeadm.go:883] updating cluster {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:43.650963   73496 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:43.651042   73496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:43.685392   73496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:43.685421   73496 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:43.685492   73496 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.685500   73496 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.685517   73496 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.685547   73496 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.685506   73496 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.685569   73496 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.685558   73496 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.685623   73496 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.686958   73496 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.686979   73496 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.686976   73496 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.687017   73496 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.687030   73496 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.687057   73496 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1105 19:11:43.898928   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.914069   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1105 19:11:43.934388   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.940664   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.947392   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.951614   73496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1105 19:11:43.951652   73496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.951686   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.957000   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.045057   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.075256   73496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1105 19:11:44.075289   73496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1105 19:11:44.075304   73496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.075310   73496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075357   73496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1105 19:11:44.075388   73496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075417   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.075481   73496 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1105 19:11:44.075431   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075511   73496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.075543   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.102803   73496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1105 19:11:44.102856   73496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.102916   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.133582   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.133640   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.133655   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.133707   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.188042   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.188058   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.272464   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.272500   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.272467   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.272531   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.289003   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.289126   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.411162   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1105 19:11:44.411248   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.411307   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1105 19:11:44.411326   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:44.411361   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1105 19:11:44.411394   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:44.411432   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478064   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1105 19:11:44.478093   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478132   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1105 19:11:44.478152   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478178   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1105 19:11:44.478195   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1105 19:11:44.478211   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1105 19:11:44.478226   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:44.478249   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1105 19:11:44.478257   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:44.478324   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:44.889847   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.035199   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.534769   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.035551   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.535664   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.035103   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.535581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.035077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.535660   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.035462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.534898   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.962742   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.462884   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.724058   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:48.727054   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.976315   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.498135546s)
	I1105 19:11:46.976348   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1105 19:11:46.976361   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.498084867s)
	I1105 19:11:46.976386   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.498096252s)
	I1105 19:11:46.976392   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.498054417s)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1105 19:11:46.976395   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1105 19:11:46.976368   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976436   73496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.086553002s)
	I1105 19:11:46.976471   73496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1105 19:11:46.976488   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976506   73496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:46.976551   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:49.054369   73496 ssh_runner.go:235] Completed: which crictl: (2.077794607s)
	I1105 19:11:49.054455   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:49.054480   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.077976168s)
	I1105 19:11:49.054497   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1105 19:11:49.054520   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.054551   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.089648   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.509600   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455021031s)
	I1105 19:11:50.509639   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1105 19:11:50.509664   73496 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509679   73496 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.419997127s)
	I1105 19:11:50.509719   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509751   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.547301   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1105 19:11:50.547416   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:48.035320   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.535496   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.035636   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.535445   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.035499   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.535722   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.035700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.535310   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.035585   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.535468   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.962134   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.463479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.225155   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:53.723881   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:54.139987   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.592545704s)
	I1105 19:11:54.140021   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1105 19:11:54.140038   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.630297093s)
	I1105 19:11:54.140058   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1105 19:11:54.140089   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:54.140150   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:53.034919   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.535697   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.035353   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.534669   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.034957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.534747   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.035331   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.534699   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.465549   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.961291   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.725153   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:58.224417   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.887208   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.747032149s)
	I1105 19:11:55.887247   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1105 19:11:55.887278   73496 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:55.887331   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:57.753834   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.866475995s)
	I1105 19:11:57.753860   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1105 19:11:57.753879   73496 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:57.753917   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:58.605444   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1105 19:11:58.605490   73496 cache_images.go:123] Successfully loaded all cached images
	I1105 19:11:58.605498   73496 cache_images.go:92] duration metric: took 14.920064519s to LoadCachedImages
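	The sequence above is the heart of the no-preload path: each required image is inspected with `sudo podman image inspect`, stale tags are removed with `crictl rmi`, and the cached tarballs under /var/lib/minikube/images are streamed in with `sudo podman load -i ...`, taking about 15 seconds in total for this run. Below is a minimal Go sketch (not minikube's own code) of that load-and-verify step, shelling out to the same commands the log runs; the tarball path and image tag in main are illustrative values, not taken from this run.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // loadAndVerify mirrors the sequence visible in the log: load an image
    // tarball with podman, then confirm the tag is visible to CRI-O via crictl.
    func loadAndVerify(tarball, tag string) error {
        // sudo podman load -i /var/lib/minikube/images/<tarball>
        if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
            return fmt.Errorf("podman load: %v: %s", err, out)
        }
        // sudo crictl images --output json (the same listing the log runs up front)
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return fmt.Errorf("crictl images: %v", err)
        }
        if !strings.Contains(string(out), tag) {
            return fmt.Errorf("image %s not visible to the runtime after load", tag)
        }
        return nil
    }

    func main() {
        // Placeholder arguments for illustration only.
        if err := loadAndVerify("/var/lib/minikube/images/kube-apiserver_v1.31.2",
            "registry.k8s.io/kube-apiserver:v1.31.2"); err != nil {
            fmt.Println(err)
        }
    }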
	I1105 19:11:58.605512   73496 kubeadm.go:934] updating node { 192.168.72.101 8443 v1.31.2 crio true true} ...
	I1105 19:11:58.605627   73496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-459223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:58.605719   73496 ssh_runner.go:195] Run: crio config
	I1105 19:11:58.654396   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:11:58.654422   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:58.654432   73496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:58.654456   73496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.101 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-459223 NodeName:no-preload-459223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:58.654636   73496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-459223"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.101"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.101"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:58.654714   73496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:58.666580   73496 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:58.666659   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:58.676390   73496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:11:58.692426   73496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:58.708650   73496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1105 19:11:58.727451   73496 ssh_runner.go:195] Run: grep 192.168.72.101	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:58.731200   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
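	The two commands above pin control-plane.minikube.internal in the guest's /etc/hosts: the grep checks whether the entry already exists, and the bash one-liner strips any old line for that host before appending "192.168.72.101<TAB>control-plane.minikube.internal". A small Go sketch of the same idempotent rewrite follows; it is pure string manipulation, and installing the result (the `sudo cp` step in the one-liner) is deliberately left out.

    package main

    import (
        "fmt"
        "strings"
    )

    // pinHost reproduces what the one-liner in the log does with grep -v and
    // echo: drop any existing line for the host and append a fresh
    // "IP<TAB>host" entry. It returns the new /etc/hosts contents.
    func pinHost(hostsFile, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(hostsFile, "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // the grep -v $'\t<host>$' part
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        // Example contents for illustration only.
        current := "127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal"
        fmt.Print(pinHost(current, "192.168.72.101", "control-plane.minikube.internal"))
    }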
	I1105 19:11:58.743437   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:58.850614   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:58.867662   73496 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223 for IP: 192.168.72.101
	I1105 19:11:58.867694   73496 certs.go:194] generating shared ca certs ...
	I1105 19:11:58.867715   73496 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:58.867896   73496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:58.867954   73496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:58.867988   73496 certs.go:256] generating profile certs ...
	I1105 19:11:58.868073   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/client.key
	I1105 19:11:58.868129   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key.0f61fe1e
	I1105 19:11:58.868163   73496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key
	I1105 19:11:58.868276   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:58.868316   73496 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:58.868323   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:58.868347   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:58.868380   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:58.868409   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:58.868450   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:58.869179   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:58.911433   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:58.947863   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:58.977511   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:59.022637   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:11:59.060992   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:59.086516   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:59.109616   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:59.135019   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:59.159832   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:59.184470   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:59.207138   73496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:59.224379   73496 ssh_runner.go:195] Run: openssl version
	I1105 19:11:59.230142   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:59.243624   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248086   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248157   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.253684   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:59.264169   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:59.274837   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279102   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279159   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.284540   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:59.295198   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:59.306105   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310073   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310115   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.315240   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:59.325470   73496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:59.329485   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:59.334985   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:59.340316   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:59.345717   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:59.351082   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:59.356631   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
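	The stat and the six `openssl x509 -noout ... -checkend 86400` runs above all ask the same question for each control-plane certificate: does it expire within the next 86400 seconds (24 hours)? A Go equivalent using crypto/x509 is sketched below; this is an illustration rather than minikube's own check, and the certificate path in main is simply one of the files listed above.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -noout -checkend 86400` answers (86400s = 24h).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }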
	I1105 19:11:59.361951   73496 kubeadm.go:392] StartCluster: {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:59.362047   73496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:59.362084   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.398746   73496 cri.go:89] found id: ""
	I1105 19:11:59.398819   73496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:59.408597   73496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:59.408614   73496 kubeadm.go:593] restartPrimaryControlPlane start ...
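	Because the `sudo ls` above found kubeadm-flags.env, the kubelet config, and the etcd data directory left over from the previous run, minikube opts to restart the existing control plane rather than run a fresh `kubeadm init`. A rough Go sketch of that presence check, assuming (as the log suggests) that all three paths must exist for the restart path to be taken:

    package main

    import (
        "fmt"
        "os"
    )

    // canRestart checks for the same three artifacts the log lists: if the
    // kubelet flag file, the kubelet config, and the etcd data directory from a
    // previous run are all present, a cluster restart is attempted instead of a
    // fresh kubeadm init.
    func canRestart() bool {
        for _, p := range []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        } {
            if _, err := os.Stat(p); err != nil {
                return false
            }
        }
        return true
    }

    func main() {
        fmt.Println("attempt cluster restart:", canRestart())
    }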
	I1105 19:11:59.408656   73496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:59.418082   73496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:59.419128   73496 kubeconfig.go:125] found "no-preload-459223" server: "https://192.168.72.101:8443"
	I1105 19:11:59.421286   73496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:59.430458   73496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.101
	I1105 19:11:59.430490   73496 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:59.430500   73496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:59.430549   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.464047   73496 cri.go:89] found id: ""
	I1105 19:11:59.464102   73496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:59.480978   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:59.490808   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:59.490829   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:59.490871   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:59.499505   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:59.499559   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:59.508247   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:59.516942   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:59.517005   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:59.525910   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.534349   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:59.534392   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.544212   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:59.553794   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:59.553857   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:59.562739   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:59.571819   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:59.680938   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.564659   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:58.034948   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:58.534748   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.034961   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.535634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.035311   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.534756   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.035266   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.535256   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.035489   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.534701   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.963075   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.462112   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.224544   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:02.225623   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.226711   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.775338   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.844402   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.957534   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:12:00.957630   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.458375   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.958215   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.975834   73496 api_server.go:72] duration metric: took 1.018298528s to wait for apiserver process to appear ...
	I1105 19:12:01.975862   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:12:01.975884   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.774116   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.774149   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.774164   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.825378   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.825427   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.976663   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.984209   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:04.984244   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.476825   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.484608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.484644   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.975985   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.981608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.981639   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:06.476014   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:06.480296   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:12:06.487584   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:12:06.487613   73496 api_server.go:131] duration metric: took 4.511744097s to wait for apiserver health ...
	I1105 19:12:06.487623   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:12:06.487632   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:12:06.489302   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:12:03.034795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:03.534764   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.034833   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.534795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.034815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.534885   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.535327   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.035253   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.535011   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.961693   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.962003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:07.461125   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.724362   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:09.224191   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.490496   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:12:06.500809   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:12:06.529242   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:12:06.542769   73496 system_pods.go:59] 8 kube-system pods found
	I1105 19:12:06.542806   73496 system_pods.go:61] "coredns-7c65d6cfc9-9vvhj" [fde1a6e7-6807-440c-a38d-4f39ede6c11e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:12:06.542818   73496 system_pods.go:61] "etcd-no-preload-459223" [398e3fc3-6902-4cbb-bc50-a72bab461839] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:12:06.542828   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [33a306b0-a41d-4ca3-9d01-69faa7825fe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:12:06.542837   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [865ae24c-d991-4650-9e17-7242f84403e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:12:06.542844   73496 system_pods.go:61] "kube-proxy-6h584" [dd35774f-a245-42af-8fe9-bd6933ad0e30] Running
	I1105 19:12:06.542852   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [27d3685e-d548-49b6-a24d-02b1f8656c66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:12:06.542859   73496 system_pods.go:61] "metrics-server-6867b74b74-5sp2j" [7ddaa66e-b4ba-4241-8dba-5fc6ab66d777] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:12:06.542864   73496 system_pods.go:61] "storage-provisioner" [49786ba3-e9fc-45ad-9418-fd3a0a7b652c] Running
	I1105 19:12:06.542873   73496 system_pods.go:74] duration metric: took 13.603868ms to wait for pod list to return data ...
	I1105 19:12:06.542883   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:12:06.549398   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:12:06.549425   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:12:06.549435   73496 node_conditions.go:105] duration metric: took 6.546615ms to run NodePressure ...
	I1105 19:12:06.549452   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:06.812829   73496 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818052   73496 kubeadm.go:739] kubelet initialised
	I1105 19:12:06.818082   73496 kubeadm.go:740] duration metric: took 5.227942ms waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818093   73496 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:12:06.823883   73496 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.830129   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830164   73496 pod_ready.go:82] duration metric: took 6.253499ms for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.830176   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830187   73496 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.834901   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834942   73496 pod_ready.go:82] duration metric: took 4.743456ms for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.834954   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834988   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.841446   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841474   73496 pod_ready.go:82] duration metric: took 6.472942ms for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.841485   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841494   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.933972   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.933998   73496 pod_ready.go:82] duration metric: took 92.493084ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.934006   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.934012   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333443   73496 pod_ready.go:93] pod "kube-proxy-6h584" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:07.333473   73496 pod_ready.go:82] duration metric: took 399.45278ms for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333486   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:09.339907   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:08.035104   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:08.534784   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.035198   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.535319   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.035258   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.534634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.035604   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.535077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.035096   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.961614   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.962113   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.724418   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.724954   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.839467   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.839725   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.035100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:13.534793   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.035120   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.535318   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.035062   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.535127   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.034840   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.534830   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.035105   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.534928   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.961398   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.224300   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.729666   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.339542   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:17.840399   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:17.840424   73496 pod_ready.go:82] duration metric: took 10.506929493s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:17.840433   73496 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:19.846676   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.035126   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:18.535446   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.035154   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.535413   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.035580   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.534802   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.035030   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.535250   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.034785   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.534700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.460480   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.461609   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.223496   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.224908   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.847279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:24.347279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.034721   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.534672   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.035358   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.534813   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.535342   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.034934   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.534766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.035389   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.534831   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.961556   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.460682   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:25.723807   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:27.724515   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.346351   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:28.035226   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:28.535577   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.034984   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.535633   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.035509   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.534907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.535421   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.034719   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.534952   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:32.535067   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:32.575052   74485 cri.go:89] found id: ""
	I1105 19:12:32.575085   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.575096   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:32.575104   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:32.575164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:32.609969   74485 cri.go:89] found id: ""
	I1105 19:12:32.610003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.610011   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:32.610017   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:32.610065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:32.642343   74485 cri.go:89] found id: ""
	I1105 19:12:32.642369   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.642376   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:32.642381   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:32.642426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:28.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:30.960340   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.725101   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.224788   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:31.346559   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:33.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.680144   74485 cri.go:89] found id: ""
	I1105 19:12:32.680177   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.680188   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:32.680196   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:32.680270   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:32.715216   74485 cri.go:89] found id: ""
	I1105 19:12:32.715248   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.715259   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:32.715267   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:32.715321   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:32.751742   74485 cri.go:89] found id: ""
	I1105 19:12:32.751771   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.751795   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:32.751803   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:32.751865   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:32.786944   74485 cri.go:89] found id: ""
	I1105 19:12:32.787003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.787015   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:32.787023   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:32.787080   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:32.820523   74485 cri.go:89] found id: ""
	I1105 19:12:32.820550   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.820557   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:32.820565   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:32.820575   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:32.873960   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:32.874000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:32.889268   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:32.889296   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:33.011825   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:33.011846   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:33.011862   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:33.082785   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:33.082827   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:35.630678   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:35.644410   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:35.644492   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:35.679567   74485 cri.go:89] found id: ""
	I1105 19:12:35.679598   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.679607   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:35.679613   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:35.679666   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:35.713685   74485 cri.go:89] found id: ""
	I1105 19:12:35.713713   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.713721   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:35.713726   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:35.713789   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:35.749496   74485 cri.go:89] found id: ""
	I1105 19:12:35.749525   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.749536   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:35.749543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:35.749611   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:35.784228   74485 cri.go:89] found id: ""
	I1105 19:12:35.784254   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.784263   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:35.784269   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:35.784317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:35.818620   74485 cri.go:89] found id: ""
	I1105 19:12:35.818680   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.818696   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:35.818703   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:35.818769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:35.852525   74485 cri.go:89] found id: ""
	I1105 19:12:35.852554   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.852566   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:35.852574   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:35.852648   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:35.887906   74485 cri.go:89] found id: ""
	I1105 19:12:35.887931   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.887939   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:35.887944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:35.887994   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:35.920566   74485 cri.go:89] found id: ""
	I1105 19:12:35.920594   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.920602   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:35.920612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:35.920627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:35.972706   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:35.972742   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:35.986114   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:35.986141   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:36.067016   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:36.067044   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:36.067060   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:36.158947   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:36.159003   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:32.962679   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.461449   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:37.462001   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:34.724028   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:36.724174   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.728373   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.848563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.347478   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:40.347899   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.700738   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:38.713280   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:38.713351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:38.747293   74485 cri.go:89] found id: ""
	I1105 19:12:38.747335   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.747347   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:38.747355   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:38.747414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:38.781607   74485 cri.go:89] found id: ""
	I1105 19:12:38.781635   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.781643   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:38.781648   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:38.781703   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:38.815303   74485 cri.go:89] found id: ""
	I1105 19:12:38.815333   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.815342   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:38.815348   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:38.815397   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:38.850128   74485 cri.go:89] found id: ""
	I1105 19:12:38.850156   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.850166   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:38.850174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:38.850233   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:38.882470   74485 cri.go:89] found id: ""
	I1105 19:12:38.882493   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.882500   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:38.882506   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:38.882563   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:38.914669   74485 cri.go:89] found id: ""
	I1105 19:12:38.914698   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.914706   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:38.914713   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:38.914762   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:38.946521   74485 cri.go:89] found id: ""
	I1105 19:12:38.946548   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.946556   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:38.946561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:38.946613   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:38.979628   74485 cri.go:89] found id: ""
	I1105 19:12:38.979655   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.979663   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:38.979672   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:38.979682   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:39.056066   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:39.056102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.092303   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:39.092333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:39.143754   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:39.143790   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:39.156553   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:39.156587   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:39.220882   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:41.721766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:41.734823   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:41.734893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:41.768636   74485 cri.go:89] found id: ""
	I1105 19:12:41.768668   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.768685   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:41.768693   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:41.768750   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:41.809506   74485 cri.go:89] found id: ""
	I1105 19:12:41.809533   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.809541   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:41.809546   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:41.809606   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:41.849953   74485 cri.go:89] found id: ""
	I1105 19:12:41.849977   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.849985   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:41.849991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:41.850037   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:41.893042   74485 cri.go:89] found id: ""
	I1105 19:12:41.893072   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.893084   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:41.893091   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:41.893152   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:41.936259   74485 cri.go:89] found id: ""
	I1105 19:12:41.936282   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.936292   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:41.936298   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:41.936347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:41.970322   74485 cri.go:89] found id: ""
	I1105 19:12:41.970344   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.970353   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:41.970360   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:41.970427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:42.004351   74485 cri.go:89] found id: ""
	I1105 19:12:42.004375   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.004383   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:42.004388   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:42.004443   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:42.035136   74485 cri.go:89] found id: ""
	I1105 19:12:42.035163   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.035174   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:42.035185   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:42.035201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:42.086760   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:42.086801   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:42.100795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:42.100829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:42.167480   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:42.167509   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:42.167529   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:42.248625   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:42.248664   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.961606   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.461423   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:41.224956   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:43.724906   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.846509   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.847235   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.785100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:44.798182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:44.798248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:44.834080   74485 cri.go:89] found id: ""
	I1105 19:12:44.834107   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.834115   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:44.834120   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:44.834179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:44.870572   74485 cri.go:89] found id: ""
	I1105 19:12:44.870602   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.870613   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:44.870620   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:44.870691   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:44.908960   74485 cri.go:89] found id: ""
	I1105 19:12:44.908991   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.909002   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:44.909010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:44.909075   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:44.945310   74485 cri.go:89] found id: ""
	I1105 19:12:44.945342   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.945350   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:44.945355   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:44.945409   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:44.982893   74485 cri.go:89] found id: ""
	I1105 19:12:44.982935   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.982946   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:44.982953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:44.983030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:45.015529   74485 cri.go:89] found id: ""
	I1105 19:12:45.015559   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.015571   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:45.015578   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:45.015640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:45.047252   74485 cri.go:89] found id: ""
	I1105 19:12:45.047284   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.047295   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:45.047302   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:45.047364   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:45.082963   74485 cri.go:89] found id: ""
	I1105 19:12:45.083009   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.083018   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:45.083026   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:45.083039   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:45.131844   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:45.131881   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:45.145500   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:45.145530   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:45.214668   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:45.214709   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:45.214725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:45.291203   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:45.291243   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:44.963672   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.461610   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:46.223849   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:48.225352   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.346007   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:49.346691   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.831908   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:47.844873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:47.844957   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:47.881587   74485 cri.go:89] found id: ""
	I1105 19:12:47.881617   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.881628   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:47.881644   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:47.881714   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:47.918381   74485 cri.go:89] found id: ""
	I1105 19:12:47.918411   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.918423   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:47.918430   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:47.918491   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:47.950835   74485 cri.go:89] found id: ""
	I1105 19:12:47.950864   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.950880   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:47.950889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:47.950947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:47.985234   74485 cri.go:89] found id: ""
	I1105 19:12:47.985261   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.985272   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:47.985279   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:47.985338   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:48.019406   74485 cri.go:89] found id: ""
	I1105 19:12:48.019437   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.019448   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:48.019455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:48.019532   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:48.053126   74485 cri.go:89] found id: ""
	I1105 19:12:48.053160   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.053172   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:48.053180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:48.053241   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:48.086847   74485 cri.go:89] found id: ""
	I1105 19:12:48.086872   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.086879   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:48.086885   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:48.086944   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:48.122366   74485 cri.go:89] found id: ""
	I1105 19:12:48.122388   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.122396   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:48.122404   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:48.122421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:48.171579   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:48.171622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:48.185207   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:48.185234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:48.249553   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:48.249575   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:48.249586   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:48.323391   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:48.323427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:50.861939   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:50.874943   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:50.875041   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:50.911498   74485 cri.go:89] found id: ""
	I1105 19:12:50.911522   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.911530   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:50.911536   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:50.911591   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:50.946936   74485 cri.go:89] found id: ""
	I1105 19:12:50.946962   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.946988   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:50.947034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:50.947098   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:50.983220   74485 cri.go:89] found id: ""
	I1105 19:12:50.983246   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.983258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:50.983265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:50.983314   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:51.017052   74485 cri.go:89] found id: ""
	I1105 19:12:51.017078   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.017086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:51.017092   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:51.017141   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:51.051417   74485 cri.go:89] found id: ""
	I1105 19:12:51.051448   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.051459   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:51.051466   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:51.051529   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:51.085129   74485 cri.go:89] found id: ""
	I1105 19:12:51.085164   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.085177   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:51.085182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:51.085232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:51.122065   74485 cri.go:89] found id: ""
	I1105 19:12:51.122100   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.122113   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:51.122120   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:51.122178   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:51.154909   74485 cri.go:89] found id: ""
	I1105 19:12:51.154938   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.154946   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:51.154954   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:51.154966   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:51.167768   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:51.167798   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:51.231849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:51.231873   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:51.231897   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:51.314426   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:51.314487   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:51.356654   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:51.356685   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:49.961294   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.461707   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:50.723534   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.723821   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:51.347677   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.847328   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.911774   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:53.924884   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:53.924968   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:53.957690   74485 cri.go:89] found id: ""
	I1105 19:12:53.957719   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.957729   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:53.957737   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:53.957802   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:53.990717   74485 cri.go:89] found id: ""
	I1105 19:12:53.990744   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.990751   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:53.990757   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:53.990803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:54.023229   74485 cri.go:89] found id: ""
	I1105 19:12:54.023251   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.023258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:54.023263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:54.023320   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:54.056950   74485 cri.go:89] found id: ""
	I1105 19:12:54.056977   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.056987   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:54.056995   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:54.057056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:54.091729   74485 cri.go:89] found id: ""
	I1105 19:12:54.091756   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.091768   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:54.091776   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:54.091828   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:54.123964   74485 cri.go:89] found id: ""
	I1105 19:12:54.123991   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.124001   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:54.124009   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:54.124070   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:54.155164   74485 cri.go:89] found id: ""
	I1105 19:12:54.155195   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.155204   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:54.155209   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:54.155268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:54.188161   74485 cri.go:89] found id: ""
	I1105 19:12:54.188191   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.188202   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:54.188213   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:54.188226   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:54.240906   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:54.240941   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:54.254061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:54.254093   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:54.321973   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:54.322007   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:54.322026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:54.405106   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:54.405147   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:56.941801   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:56.954658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:56.954741   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:56.990372   74485 cri.go:89] found id: ""
	I1105 19:12:56.990400   74485 logs.go:282] 0 containers: []
	W1105 19:12:56.990411   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:56.990419   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:56.990479   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:57.023047   74485 cri.go:89] found id: ""
	I1105 19:12:57.023082   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.023093   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:57.023102   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:57.023163   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:57.054991   74485 cri.go:89] found id: ""
	I1105 19:12:57.055021   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.055030   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:57.055036   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:57.055094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:57.086182   74485 cri.go:89] found id: ""
	I1105 19:12:57.086214   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.086225   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:57.086233   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:57.086295   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:57.120322   74485 cri.go:89] found id: ""
	I1105 19:12:57.120350   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.120361   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:57.120368   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:57.120431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:57.153751   74485 cri.go:89] found id: ""
	I1105 19:12:57.153781   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.153790   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:57.153796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:57.153845   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:57.189208   74485 cri.go:89] found id: ""
	I1105 19:12:57.189234   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.189244   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:57.189251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:57.189317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:57.223259   74485 cri.go:89] found id: ""
	I1105 19:12:57.223292   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.223301   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:57.223308   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:57.223320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:57.273063   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:57.273098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:57.287759   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:57.287783   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:57.353387   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:57.353409   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:57.353421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:57.426374   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:57.426411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:54.462191   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.960479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:54.723926   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.724988   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.224704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:55.847609   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:58.347062   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.348243   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.965907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:59.979081   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:59.979149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:00.010955   74485 cri.go:89] found id: ""
	I1105 19:13:00.011001   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.011012   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:00.011021   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:00.011081   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:00.044800   74485 cri.go:89] found id: ""
	I1105 19:13:00.044825   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.044832   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:00.044838   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:00.044894   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:00.082999   74485 cri.go:89] found id: ""
	I1105 19:13:00.083040   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.083050   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:00.083059   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:00.083125   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:00.120792   74485 cri.go:89] found id: ""
	I1105 19:13:00.120826   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.120835   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:00.120840   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:00.120903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:00.153156   74485 cri.go:89] found id: ""
	I1105 19:13:00.153188   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.153200   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:00.153207   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:00.153273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:00.189039   74485 cri.go:89] found id: ""
	I1105 19:13:00.189066   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.189073   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:00.189079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:00.189143   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:00.220904   74485 cri.go:89] found id: ""
	I1105 19:13:00.220932   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.220942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:00.220950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:00.221012   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:00.255414   74485 cri.go:89] found id: ""
	I1105 19:13:00.255443   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.255454   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:00.255464   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:00.255480   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:00.329027   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:00.329050   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:00.329061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:00.405813   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:00.405847   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:00.443302   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:00.443332   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:00.498413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:00.498452   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:58.960870   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.962098   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:01.723865   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.724945   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:02.846369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:04.846751   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.011897   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:03.025351   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:03.025419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:03.058881   74485 cri.go:89] found id: ""
	I1105 19:13:03.058910   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.058920   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:03.058928   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:03.059018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:03.093549   74485 cri.go:89] found id: ""
	I1105 19:13:03.093580   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.093592   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:03.093600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:03.093660   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:03.132355   74485 cri.go:89] found id: ""
	I1105 19:13:03.132384   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.132395   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:03.132402   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:03.132463   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:03.164832   74485 cri.go:89] found id: ""
	I1105 19:13:03.164864   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.164875   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:03.164888   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:03.164947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:03.203187   74485 cri.go:89] found id: ""
	I1105 19:13:03.203213   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.203221   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:03.203226   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:03.203282   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:03.238867   74485 cri.go:89] found id: ""
	I1105 19:13:03.238899   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.238921   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:03.238928   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:03.239010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:03.276139   74485 cri.go:89] found id: ""
	I1105 19:13:03.276174   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.276187   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:03.276195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:03.276251   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:03.312588   74485 cri.go:89] found id: ""
	I1105 19:13:03.312613   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.312631   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:03.312639   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:03.312650   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:03.379754   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:03.379782   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:03.379797   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:03.455719   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:03.455754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.493428   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:03.493458   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:03.545447   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:03.545481   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.060213   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:06.074756   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:06.074831   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:06.111392   74485 cri.go:89] found id: ""
	I1105 19:13:06.111421   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.111429   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:06.111435   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:06.111493   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:06.147535   74485 cri.go:89] found id: ""
	I1105 19:13:06.147568   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.147579   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:06.147585   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:06.147646   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:06.183176   74485 cri.go:89] found id: ""
	I1105 19:13:06.183198   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.183205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:06.183211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:06.183262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:06.213957   74485 cri.go:89] found id: ""
	I1105 19:13:06.213983   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.213992   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:06.213997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:06.214060   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:06.251199   74485 cri.go:89] found id: ""
	I1105 19:13:06.251227   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.251234   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:06.251240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:06.251297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:06.288128   74485 cri.go:89] found id: ""
	I1105 19:13:06.288157   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.288167   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:06.288174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:06.288236   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:06.325265   74485 cri.go:89] found id: ""
	I1105 19:13:06.325296   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.325306   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:06.325314   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:06.325375   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:06.359649   74485 cri.go:89] found id: ""
	I1105 19:13:06.359689   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.359700   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:06.359710   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:06.359725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:06.408423   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:06.408456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.421776   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:06.421804   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:06.487464   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:06.487493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:06.487507   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:06.565789   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:06.565829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.461192   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.725002   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:08.225146   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:07.346498   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.347264   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.104578   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:09.117930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:09.118022   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:09.156055   74485 cri.go:89] found id: ""
	I1105 19:13:09.156083   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.156093   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:09.156101   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:09.156161   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:09.190470   74485 cri.go:89] found id: ""
	I1105 19:13:09.190499   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.190509   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:09.190516   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:09.190576   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:09.222568   74485 cri.go:89] found id: ""
	I1105 19:13:09.222595   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.222606   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:09.222612   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:09.222677   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:09.260251   74485 cri.go:89] found id: ""
	I1105 19:13:09.260282   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.260292   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:09.260300   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:09.260362   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:09.296006   74485 cri.go:89] found id: ""
	I1105 19:13:09.296036   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.296047   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:09.296054   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:09.296118   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:09.331213   74485 cri.go:89] found id: ""
	I1105 19:13:09.331246   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.331257   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:09.331265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:09.331333   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:09.364286   74485 cri.go:89] found id: ""
	I1105 19:13:09.364316   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.364327   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:09.364335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:09.364445   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:09.398060   74485 cri.go:89] found id: ""
	I1105 19:13:09.398084   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.398092   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:09.398101   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:09.398113   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:09.447373   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:09.447409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:09.461483   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:09.461514   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:09.528213   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:09.528236   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:09.528248   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:09.607397   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:09.607430   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.146158   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:12.159183   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:12.159262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:12.193917   74485 cri.go:89] found id: ""
	I1105 19:13:12.193952   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.193963   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:12.193971   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:12.194036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:12.226558   74485 cri.go:89] found id: ""
	I1105 19:13:12.226585   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.226594   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:12.226600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:12.226662   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:12.258437   74485 cri.go:89] found id: ""
	I1105 19:13:12.258469   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.258481   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:12.258488   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:12.258557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:12.291308   74485 cri.go:89] found id: ""
	I1105 19:13:12.291341   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.291353   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:12.291361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:12.291431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:12.325768   74485 cri.go:89] found id: ""
	I1105 19:13:12.325801   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.325812   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:12.325819   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:12.325884   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:12.361077   74485 cri.go:89] found id: ""
	I1105 19:13:12.361100   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.361108   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:12.361118   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:12.361179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:12.394769   74485 cri.go:89] found id: ""
	I1105 19:13:12.394791   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.394800   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:12.394806   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:12.394864   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:12.430138   74485 cri.go:89] found id: ""
	I1105 19:13:12.430167   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.430177   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:12.430189   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:12.430200   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.472596   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:12.472637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:12.523107   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:12.523143   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:12.535797   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:12.535824   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:12.604088   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:12.604108   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:12.604123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:08.460647   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.462830   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.225468   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.225693   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:11.849320   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.347487   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:15.185725   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:15.200158   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:15.200238   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:15.238309   74485 cri.go:89] found id: ""
	I1105 19:13:15.238334   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.238342   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:15.238349   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:15.238404   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:15.272897   74485 cri.go:89] found id: ""
	I1105 19:13:15.272927   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.272938   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:15.272945   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:15.273013   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:15.307700   74485 cri.go:89] found id: ""
	I1105 19:13:15.307726   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.307737   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:15.307744   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:15.307810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:15.340156   74485 cri.go:89] found id: ""
	I1105 19:13:15.340182   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.340196   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:15.340202   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:15.340252   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:15.375930   74485 cri.go:89] found id: ""
	I1105 19:13:15.375963   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.375971   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:15.375976   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:15.376031   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:15.409876   74485 cri.go:89] found id: ""
	I1105 19:13:15.409905   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.409915   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:15.409922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:15.409984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:15.442781   74485 cri.go:89] found id: ""
	I1105 19:13:15.442808   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.442819   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:15.442825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:15.442896   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:15.480578   74485 cri.go:89] found id: ""
	I1105 19:13:15.480606   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.480614   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:15.480623   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:15.480634   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:15.530910   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:15.530952   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:15.544351   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:15.544382   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:15.618345   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:15.618373   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:15.618396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:15.704408   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:15.704451   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:14.961408   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.961486   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.724130   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.724204   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.724704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.347818   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.846423   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.244882   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:18.258667   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:18.258758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:18.292140   74485 cri.go:89] found id: ""
	I1105 19:13:18.292163   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.292171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:18.292178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:18.292235   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:18.324954   74485 cri.go:89] found id: ""
	I1105 19:13:18.324979   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.324985   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:18.324991   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:18.325048   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:18.361943   74485 cri.go:89] found id: ""
	I1105 19:13:18.361972   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.361983   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:18.361991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:18.362062   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:18.396012   74485 cri.go:89] found id: ""
	I1105 19:13:18.396036   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.396044   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:18.396050   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:18.396097   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:18.428852   74485 cri.go:89] found id: ""
	I1105 19:13:18.428875   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.428883   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:18.428889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:18.428946   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:18.464364   74485 cri.go:89] found id: ""
	I1105 19:13:18.464390   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.464397   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:18.464404   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:18.464464   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:18.496478   74485 cri.go:89] found id: ""
	I1105 19:13:18.496505   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.496514   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:18.496519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:18.496577   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:18.530313   74485 cri.go:89] found id: ""
	I1105 19:13:18.530339   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.530348   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:18.530356   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:18.530368   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:18.582593   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:18.582627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:18.596580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:18.596616   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:18.663920   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:18.663959   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:18.663974   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:18.740706   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:18.740746   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.281614   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:21.295841   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:21.295919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:21.330832   74485 cri.go:89] found id: ""
	I1105 19:13:21.330856   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.330864   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:21.330869   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:21.330922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:21.365228   74485 cri.go:89] found id: ""
	I1105 19:13:21.365257   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.365265   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:21.365269   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:21.365317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:21.418675   74485 cri.go:89] found id: ""
	I1105 19:13:21.418702   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.418719   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:21.418727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:21.418793   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:21.453966   74485 cri.go:89] found id: ""
	I1105 19:13:21.453994   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.454003   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:21.454008   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:21.454058   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:21.492030   74485 cri.go:89] found id: ""
	I1105 19:13:21.492056   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.492067   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:21.492078   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:21.492128   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:21.529146   74485 cri.go:89] found id: ""
	I1105 19:13:21.529174   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.529183   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:21.529190   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:21.529250   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:21.566491   74485 cri.go:89] found id: ""
	I1105 19:13:21.566519   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.566528   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:21.566533   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:21.566595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:21.605720   74485 cri.go:89] found id: ""
	I1105 19:13:21.605745   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.605754   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:21.605762   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:21.605772   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:21.682385   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:21.682408   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:21.682420   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:21.764519   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:21.764557   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.805090   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:21.805117   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:21.857560   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:21.857593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:19.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.961995   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.224702   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.226864   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:20.850915   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.346819   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.347230   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:24.371420   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:24.384566   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:24.384634   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:24.416283   74485 cri.go:89] found id: ""
	I1105 19:13:24.416308   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.416319   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:24.416327   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:24.416388   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:24.452875   74485 cri.go:89] found id: ""
	I1105 19:13:24.452899   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.452907   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:24.452913   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:24.452964   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:24.489946   74485 cri.go:89] found id: ""
	I1105 19:13:24.489974   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.489992   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:24.490000   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:24.490056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:24.527348   74485 cri.go:89] found id: ""
	I1105 19:13:24.527377   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.527388   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:24.527395   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:24.527451   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:24.558992   74485 cri.go:89] found id: ""
	I1105 19:13:24.559024   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.559035   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:24.559047   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:24.559105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:24.591405   74485 cri.go:89] found id: ""
	I1105 19:13:24.591437   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.591448   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:24.591455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:24.591516   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.625002   74485 cri.go:89] found id: ""
	I1105 19:13:24.625031   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.625040   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:24.625048   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:24.625114   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:24.657867   74485 cri.go:89] found id: ""
	I1105 19:13:24.657896   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.657907   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:24.657918   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:24.657931   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:24.708444   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:24.708482   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:24.721771   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:24.721814   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:24.793946   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:24.793980   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:24.793996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:24.875130   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:24.875167   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:27.412872   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:27.426996   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:27.427072   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:27.462434   74485 cri.go:89] found id: ""
	I1105 19:13:27.462458   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.462468   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:27.462475   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:27.462536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:27.496916   74485 cri.go:89] found id: ""
	I1105 19:13:27.496951   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.496962   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:27.496969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:27.497035   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:27.528826   74485 cri.go:89] found id: ""
	I1105 19:13:27.528853   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.528861   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:27.528867   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:27.528919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:27.563164   74485 cri.go:89] found id: ""
	I1105 19:13:27.563193   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.563204   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:27.563210   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:27.563284   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:27.600136   74485 cri.go:89] found id: ""
	I1105 19:13:27.600164   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.600174   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:27.600180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:27.600247   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:27.634326   74485 cri.go:89] found id: ""
	I1105 19:13:27.634358   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.634368   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:27.634377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:27.634452   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.462295   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:26.961567   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.723935   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.725498   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.847362   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.349542   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.668154   74485 cri.go:89] found id: ""
	I1105 19:13:27.668185   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.668196   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:27.668203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:27.668263   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:27.706016   74485 cri.go:89] found id: ""
	I1105 19:13:27.706043   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.706051   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:27.706059   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:27.706071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:27.755890   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:27.755929   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:27.773038   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:27.773063   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:27.863392   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:27.863414   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:27.863429   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:27.949149   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:27.949185   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.489333   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:30.502794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:30.502878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:30.536263   74485 cri.go:89] found id: ""
	I1105 19:13:30.536289   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.536297   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:30.536302   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:30.536347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:30.570418   74485 cri.go:89] found id: ""
	I1105 19:13:30.570445   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.570455   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:30.570462   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:30.570523   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:30.601972   74485 cri.go:89] found id: ""
	I1105 19:13:30.602003   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.602013   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:30.602020   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:30.602086   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:30.634151   74485 cri.go:89] found id: ""
	I1105 19:13:30.634183   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.634195   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:30.634203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:30.634265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:30.666384   74485 cri.go:89] found id: ""
	I1105 19:13:30.666415   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.666425   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:30.666433   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:30.666498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:30.699587   74485 cri.go:89] found id: ""
	I1105 19:13:30.699619   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.699631   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:30.699639   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:30.699699   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:30.731917   74485 cri.go:89] found id: ""
	I1105 19:13:30.731972   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.731983   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:30.731990   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:30.732051   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:30.768807   74485 cri.go:89] found id: ""
	I1105 19:13:30.768832   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.768840   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:30.768849   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:30.768860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:30.848594   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:30.848626   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.889031   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:30.889067   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:30.940550   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:30.940588   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:30.953810   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:30.953845   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:31.023633   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:29.461686   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:31.961484   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.225024   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.723965   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.847298   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:35.347135   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:33.524150   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:33.539025   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:33.539112   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:33.584756   74485 cri.go:89] found id: ""
	I1105 19:13:33.584786   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.584799   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:33.584807   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:33.584869   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:33.624785   74485 cri.go:89] found id: ""
	I1105 19:13:33.624816   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.624829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:33.624836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:33.625025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:33.668750   74485 cri.go:89] found id: ""
	I1105 19:13:33.668783   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.668794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:33.668804   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:33.668867   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:33.701675   74485 cri.go:89] found id: ""
	I1105 19:13:33.701707   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.701735   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:33.701743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:33.701817   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:33.737368   74485 cri.go:89] found id: ""
	I1105 19:13:33.737393   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.737401   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:33.737407   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:33.737458   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:33.770589   74485 cri.go:89] found id: ""
	I1105 19:13:33.770620   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.770630   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:33.770638   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:33.770704   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:33.802635   74485 cri.go:89] found id: ""
	I1105 19:13:33.802668   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.802680   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:33.802687   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:33.802751   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:33.839274   74485 cri.go:89] found id: ""
	I1105 19:13:33.839301   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.839309   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:33.839317   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:33.839328   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:33.881049   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:33.881090   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:33.932704   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:33.932743   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:33.945979   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:33.946007   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:34.017355   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:34.017375   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:34.017390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:36.596284   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:36.608240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:36.608306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:36.641846   74485 cri.go:89] found id: ""
	I1105 19:13:36.641878   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.641887   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:36.641901   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:36.641966   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:36.676553   74485 cri.go:89] found id: ""
	I1105 19:13:36.676584   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.676595   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:36.676602   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:36.676669   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:36.711931   74485 cri.go:89] found id: ""
	I1105 19:13:36.711961   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.711972   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:36.711980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:36.712042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:36.748510   74485 cri.go:89] found id: ""
	I1105 19:13:36.748534   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.748542   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:36.748547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:36.748596   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:36.781869   74485 cri.go:89] found id: ""
	I1105 19:13:36.781899   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.781912   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:36.781922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:36.781983   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:36.816574   74485 cri.go:89] found id: ""
	I1105 19:13:36.816597   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.816605   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:36.816610   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:36.816658   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:36.852894   74485 cri.go:89] found id: ""
	I1105 19:13:36.852921   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.852928   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:36.852934   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:36.852996   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:36.891732   74485 cri.go:89] found id: ""
	I1105 19:13:36.891764   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.891783   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:36.891795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:36.891810   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:36.964948   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:36.964972   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:36.964987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:37.043727   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:37.043765   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:37.084306   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:37.084333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:37.133238   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:37.133274   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:34.461773   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:36.960440   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:34.724805   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.224830   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.227912   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.347383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.347770   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.647492   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:39.659944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:39.660025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:39.695382   74485 cri.go:89] found id: ""
	I1105 19:13:39.695405   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.695415   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:39.695422   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:39.695480   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:39.731807   74485 cri.go:89] found id: ""
	I1105 19:13:39.731833   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.731841   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:39.731846   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:39.731895   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:39.766913   74485 cri.go:89] found id: ""
	I1105 19:13:39.766945   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.766955   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:39.766963   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:39.767049   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:39.800265   74485 cri.go:89] found id: ""
	I1105 19:13:39.800288   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.800296   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:39.800301   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:39.800346   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:39.832753   74485 cri.go:89] found id: ""
	I1105 19:13:39.832781   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.832789   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:39.832794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:39.832843   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:39.865950   74485 cri.go:89] found id: ""
	I1105 19:13:39.865980   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.865990   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:39.865997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:39.866046   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:39.902918   74485 cri.go:89] found id: ""
	I1105 19:13:39.902948   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.902957   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:39.902962   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:39.903039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:39.935086   74485 cri.go:89] found id: ""
	I1105 19:13:39.935117   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.935129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:39.935139   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:39.935152   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:39.997935   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:39.997961   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:39.997976   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:40.076794   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:40.076852   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:40.114178   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:40.114209   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:40.163512   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:40.163550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:38.961003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:40.962241   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.724237   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:43.725317   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.847149   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:44.346097   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:42.676843   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:42.689855   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:42.689930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:42.724108   74485 cri.go:89] found id: ""
	I1105 19:13:42.724139   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.724148   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:42.724156   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:42.724218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:42.760816   74485 cri.go:89] found id: ""
	I1105 19:13:42.760844   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.760854   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:42.760861   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:42.760924   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:42.795111   74485 cri.go:89] found id: ""
	I1105 19:13:42.795134   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.795142   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:42.795147   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:42.795195   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:42.832964   74485 cri.go:89] found id: ""
	I1105 19:13:42.832988   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.832997   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:42.833003   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:42.833065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:42.868817   74485 cri.go:89] found id: ""
	I1105 19:13:42.868848   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.868858   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:42.868865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:42.868933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:42.902015   74485 cri.go:89] found id: ""
	I1105 19:13:42.902044   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.902051   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:42.902056   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:42.902146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:42.934298   74485 cri.go:89] found id: ""
	I1105 19:13:42.934322   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.934330   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:42.934335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:42.934385   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:42.969804   74485 cri.go:89] found id: ""
	I1105 19:13:42.969831   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.969843   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:42.969854   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:42.969873   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:43.019922   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:43.019959   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:43.033594   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:43.033622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:43.108220   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:43.108240   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:43.108251   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:43.191946   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:43.191987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:45.730728   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:45.743344   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:45.743419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:45.777693   74485 cri.go:89] found id: ""
	I1105 19:13:45.777728   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.777739   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:45.777747   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:45.777810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:45.810195   74485 cri.go:89] found id: ""
	I1105 19:13:45.810222   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.810233   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:45.810240   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:45.810308   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:45.851210   74485 cri.go:89] found id: ""
	I1105 19:13:45.851240   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.851247   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:45.851252   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:45.851311   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:45.885501   74485 cri.go:89] found id: ""
	I1105 19:13:45.885531   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.885540   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:45.885546   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:45.885595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:45.921638   74485 cri.go:89] found id: ""
	I1105 19:13:45.921667   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.921676   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:45.921684   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:45.921745   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:45.954341   74485 cri.go:89] found id: ""
	I1105 19:13:45.954373   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.954384   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:45.954394   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:45.954461   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:45.988840   74485 cri.go:89] found id: ""
	I1105 19:13:45.988865   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.988873   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:45.988879   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:45.988949   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:46.025409   74485 cri.go:89] found id: ""
	I1105 19:13:46.025441   74485 logs.go:282] 0 containers: []
	W1105 19:13:46.025458   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:46.025470   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:46.025486   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:46.037763   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:46.037787   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:46.112619   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:46.112663   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:46.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:46.192165   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:46.192199   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:46.233235   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:46.233263   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:42.962569   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:45.461256   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:47.461781   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.225004   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.723774   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.346687   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.787685   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:48.800681   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:48.800749   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:48.835344   74485 cri.go:89] found id: ""
	I1105 19:13:48.835366   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.835374   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:48.835383   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:48.835429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:48.867447   74485 cri.go:89] found id: ""
	I1105 19:13:48.867474   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.867483   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:48.867488   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:48.867536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:48.899135   74485 cri.go:89] found id: ""
	I1105 19:13:48.899160   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.899167   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:48.899172   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:48.899221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:48.932208   74485 cri.go:89] found id: ""
	I1105 19:13:48.932243   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.932255   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:48.932263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:48.932326   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:48.967174   74485 cri.go:89] found id: ""
	I1105 19:13:48.967202   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.967210   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:48.967215   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:48.967267   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:48.998902   74485 cri.go:89] found id: ""
	I1105 19:13:48.998932   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.998942   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:48.998950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:48.999030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:49.030946   74485 cri.go:89] found id: ""
	I1105 19:13:49.030988   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.030999   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:49.031006   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:49.031074   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:49.063489   74485 cri.go:89] found id: ""
	I1105 19:13:49.063517   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.063528   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:49.063540   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:49.063555   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:49.116433   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:49.116477   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:49.131439   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:49.131476   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:49.199770   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:49.199795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:49.199809   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:49.275503   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:49.275543   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:51.816208   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:51.829328   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:51.829399   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:51.863320   74485 cri.go:89] found id: ""
	I1105 19:13:51.863346   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.863354   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:51.863359   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:51.863406   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:51.896589   74485 cri.go:89] found id: ""
	I1105 19:13:51.896618   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.896628   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:51.896635   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:51.896697   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:51.933744   74485 cri.go:89] found id: ""
	I1105 19:13:51.933769   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.933776   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:51.933781   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:51.933829   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:51.970806   74485 cri.go:89] found id: ""
	I1105 19:13:51.970829   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.970836   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:51.970842   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:51.970889   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:52.004087   74485 cri.go:89] found id: ""
	I1105 19:13:52.004116   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.004124   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:52.004129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:52.004186   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:52.041721   74485 cri.go:89] found id: ""
	I1105 19:13:52.041752   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.041763   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:52.041771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:52.041835   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:52.079253   74485 cri.go:89] found id: ""
	I1105 19:13:52.079277   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.079285   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:52.079292   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:52.079351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:52.112604   74485 cri.go:89] found id: ""
	I1105 19:13:52.112642   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.112653   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:52.112664   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:52.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:52.160799   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:52.160841   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:52.174323   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:52.174355   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:52.247358   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:52.247383   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:52.247395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:52.326071   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:52.326108   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:49.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.461239   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.724514   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.724742   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.848418   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:53.346329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.347199   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:54.866454   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:54.879015   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:54.879093   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:54.911729   74485 cri.go:89] found id: ""
	I1105 19:13:54.911765   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.911777   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:54.911785   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:54.911846   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:54.943137   74485 cri.go:89] found id: ""
	I1105 19:13:54.943169   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.943185   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:54.943193   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:54.943253   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:54.977951   74485 cri.go:89] found id: ""
	I1105 19:13:54.977980   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.977991   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:54.977998   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:54.978061   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:55.009453   74485 cri.go:89] found id: ""
	I1105 19:13:55.009478   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.009486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:55.009491   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:55.009537   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:55.040790   74485 cri.go:89] found id: ""
	I1105 19:13:55.040814   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.040821   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:55.040827   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:55.040878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:55.073401   74485 cri.go:89] found id: ""
	I1105 19:13:55.073430   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.073441   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:55.073449   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:55.073508   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:55.105419   74485 cri.go:89] found id: ""
	I1105 19:13:55.105443   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.105451   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:55.105456   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:55.105511   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:55.137363   74485 cri.go:89] found id: ""
	I1105 19:13:55.137395   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.137406   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:55.137416   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:55.137431   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:55.174176   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:55.174201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:55.221658   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:55.221693   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:55.235044   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:55.235070   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:55.308192   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:55.308218   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:55.308234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:54.461424   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:56.961198   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.223920   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.224915   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.847329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:00.347371   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.892462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:57.905472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:57.905543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:57.946044   74485 cri.go:89] found id: ""
	I1105 19:13:57.946071   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.946081   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:57.946089   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:57.946149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:57.980762   74485 cri.go:89] found id: ""
	I1105 19:13:57.980791   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.980803   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:57.980811   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:57.980874   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:58.013351   74485 cri.go:89] found id: ""
	I1105 19:13:58.013374   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.013381   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:58.013386   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:58.013433   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:58.049056   74485 cri.go:89] found id: ""
	I1105 19:13:58.049083   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.049091   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:58.049097   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:58.049147   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:58.081476   74485 cri.go:89] found id: ""
	I1105 19:13:58.081507   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.081517   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:58.081524   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:58.081583   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:58.114526   74485 cri.go:89] found id: ""
	I1105 19:13:58.114554   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.114564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:58.114571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:58.114630   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:58.148219   74485 cri.go:89] found id: ""
	I1105 19:13:58.148243   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.148252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:58.148257   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:58.148312   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:58.183254   74485 cri.go:89] found id: ""
	I1105 19:13:58.183277   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.183285   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:58.183292   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:58.183304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:58.234747   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:58.234785   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:58.248269   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:58.248300   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:58.313290   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:58.313312   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:58.313327   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:58.389847   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:58.389889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:00.927957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:00.941525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:00.941593   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:00.974891   74485 cri.go:89] found id: ""
	I1105 19:14:00.974920   74485 logs.go:282] 0 containers: []
	W1105 19:14:00.974931   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:00.974938   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:00.975018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:01.008224   74485 cri.go:89] found id: ""
	I1105 19:14:01.008250   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.008262   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:01.008270   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:01.008328   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:01.044514   74485 cri.go:89] found id: ""
	I1105 19:14:01.044545   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.044553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:01.044559   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:01.044614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:01.077091   74485 cri.go:89] found id: ""
	I1105 19:14:01.077124   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.077135   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:01.077141   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:01.077197   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:01.109947   74485 cri.go:89] found id: ""
	I1105 19:14:01.109976   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.109986   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:01.109994   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:01.110054   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:01.146162   74485 cri.go:89] found id: ""
	I1105 19:14:01.146193   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.146203   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:01.146211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:01.146275   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:01.180335   74485 cri.go:89] found id: ""
	I1105 19:14:01.180360   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.180370   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:01.180377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:01.180436   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:01.216160   74485 cri.go:89] found id: ""
	I1105 19:14:01.216189   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.216199   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:01.216221   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:01.216236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:01.229426   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:01.229455   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:01.298847   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:01.298874   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:01.298889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:01.375255   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:01.375299   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:01.417946   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:01.418026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:59.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.961362   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:59.724103   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.724976   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.725344   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:02.349032   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:04.847734   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.973713   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:03.987128   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:03.987198   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:04.020050   74485 cri.go:89] found id: ""
	I1105 19:14:04.020081   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.020091   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:04.020098   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:04.020164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:04.053458   74485 cri.go:89] found id: ""
	I1105 19:14:04.053485   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.053492   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:04.053498   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:04.053544   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:04.086417   74485 cri.go:89] found id: ""
	I1105 19:14:04.086442   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.086455   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:04.086461   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:04.086513   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:04.122035   74485 cri.go:89] found id: ""
	I1105 19:14:04.122059   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.122067   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:04.122073   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:04.122120   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:04.158732   74485 cri.go:89] found id: ""
	I1105 19:14:04.158758   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.158765   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:04.158771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:04.158822   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:04.190497   74485 cri.go:89] found id: ""
	I1105 19:14:04.190525   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.190536   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:04.190543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:04.190604   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:04.222040   74485 cri.go:89] found id: ""
	I1105 19:14:04.222066   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.222074   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:04.222079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:04.222131   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:04.258753   74485 cri.go:89] found id: ""
	I1105 19:14:04.258781   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.258793   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:04.258804   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:04.258819   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:04.299966   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:04.300052   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:04.355364   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:04.355395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:04.368954   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:04.368980   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:04.431658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:04.431688   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:04.431700   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.015289   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:07.029580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:07.029644   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:07.066931   74485 cri.go:89] found id: ""
	I1105 19:14:07.066964   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.066993   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:07.067004   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:07.067059   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:07.104315   74485 cri.go:89] found id: ""
	I1105 19:14:07.104341   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.104349   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:07.104354   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:07.104401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:07.141271   74485 cri.go:89] found id: ""
	I1105 19:14:07.141298   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.141305   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:07.141311   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:07.141360   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:07.174600   74485 cri.go:89] found id: ""
	I1105 19:14:07.174631   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.174643   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:07.174653   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:07.174707   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:07.211920   74485 cri.go:89] found id: ""
	I1105 19:14:07.211958   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.211969   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:07.211975   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:07.212027   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:07.248238   74485 cri.go:89] found id: ""
	I1105 19:14:07.248269   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.248280   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:07.248286   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:07.248344   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:07.279833   74485 cri.go:89] found id: ""
	I1105 19:14:07.279864   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.279874   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:07.279881   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:07.279931   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:07.317411   74485 cri.go:89] found id: ""
	I1105 19:14:07.317441   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.317452   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:07.317461   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:07.317474   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:07.390499   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:07.390535   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:07.390556   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.488858   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:07.488895   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:07.528612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:07.528645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:07.581884   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:07.581927   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:03.961433   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.460953   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.223402   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:08.723797   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:07.348258   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:09.846465   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.096089   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:10.110828   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:10.110898   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:10.147299   74485 cri.go:89] found id: ""
	I1105 19:14:10.147332   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.147344   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:10.147350   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:10.147401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:10.181457   74485 cri.go:89] found id: ""
	I1105 19:14:10.181482   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.181489   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:10.181495   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:10.181540   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:10.215210   74485 cri.go:89] found id: ""
	I1105 19:14:10.215241   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.215252   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:10.215259   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:10.215319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:10.249587   74485 cri.go:89] found id: ""
	I1105 19:14:10.249609   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.249617   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:10.249625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:10.249679   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:10.282566   74485 cri.go:89] found id: ""
	I1105 19:14:10.282591   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.282598   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:10.282604   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:10.282672   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:10.314312   74485 cri.go:89] found id: ""
	I1105 19:14:10.314344   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.314355   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:10.314361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:10.314415   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:10.346988   74485 cri.go:89] found id: ""
	I1105 19:14:10.347016   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.347028   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:10.347035   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:10.347088   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:10.381326   74485 cri.go:89] found id: ""
	I1105 19:14:10.381354   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.381370   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:10.381380   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:10.381394   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:10.418311   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:10.418344   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:10.469559   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:10.469590   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:10.482394   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:10.482427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:10.551831   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:10.551854   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:10.551870   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:08.462072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.961478   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:12.724974   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:11.846737   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:14.346050   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:13.127576   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:13.143182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:13.143242   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:13.188794   74485 cri.go:89] found id: ""
	I1105 19:14:13.188827   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.188839   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:13.188846   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:13.188897   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:13.221790   74485 cri.go:89] found id: ""
	I1105 19:14:13.221818   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.221829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:13.221836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:13.221893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:13.255164   74485 cri.go:89] found id: ""
	I1105 19:14:13.255194   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.255205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:13.255212   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:13.255272   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:13.288203   74485 cri.go:89] found id: ""
	I1105 19:14:13.288231   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.288241   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:13.288249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:13.288307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:13.321438   74485 cri.go:89] found id: ""
	I1105 19:14:13.321463   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.321475   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:13.321482   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:13.321541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:13.361858   74485 cri.go:89] found id: ""
	I1105 19:14:13.361886   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.361897   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:13.361905   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:13.361979   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:13.394210   74485 cri.go:89] found id: ""
	I1105 19:14:13.394239   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.394252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:13.394260   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:13.394324   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:13.434665   74485 cri.go:89] found id: ""
	I1105 19:14:13.434697   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.434705   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:13.434712   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:13.434724   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:13.447849   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:13.447875   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:13.514353   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:13.514377   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:13.514390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:13.590746   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:13.590784   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:13.627704   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:13.627732   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:16.180171   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:16.193282   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:16.193342   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:16.230087   74485 cri.go:89] found id: ""
	I1105 19:14:16.230118   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.230128   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:16.230137   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:16.230200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:16.264315   74485 cri.go:89] found id: ""
	I1105 19:14:16.264348   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.264360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:16.264368   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:16.264429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:16.298197   74485 cri.go:89] found id: ""
	I1105 19:14:16.298231   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.298243   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:16.298251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:16.298316   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:16.333149   74485 cri.go:89] found id: ""
	I1105 19:14:16.333180   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.333193   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:16.333203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:16.333268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:16.366863   74485 cri.go:89] found id: ""
	I1105 19:14:16.366887   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.366895   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:16.366900   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:16.366947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:16.400434   74485 cri.go:89] found id: ""
	I1105 19:14:16.400458   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.400466   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:16.400472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:16.400524   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:16.435475   74485 cri.go:89] found id: ""
	I1105 19:14:16.435497   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.435504   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:16.435510   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:16.435560   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:16.470577   74485 cri.go:89] found id: ""
	I1105 19:14:16.470604   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.470612   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:16.470620   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:16.470632   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:16.483061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:16.483094   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:16.550662   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:16.550690   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:16.550702   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:16.629372   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:16.629411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:16.669488   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:16.669526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:12.961576   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.461132   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.461748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.224068   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.225065   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:16.347305   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:18.847161   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.219244   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:19.232682   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:19.232744   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:19.264594   74485 cri.go:89] found id: ""
	I1105 19:14:19.264624   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.264635   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:19.264649   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:19.264708   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:19.301434   74485 cri.go:89] found id: ""
	I1105 19:14:19.301468   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.301479   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:19.301487   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:19.301558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:19.333465   74485 cri.go:89] found id: ""
	I1105 19:14:19.333494   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.333502   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:19.333508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:19.333558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:19.365865   74485 cri.go:89] found id: ""
	I1105 19:14:19.365892   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.365900   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:19.365906   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:19.365958   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:19.406533   74485 cri.go:89] found id: ""
	I1105 19:14:19.406563   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.406575   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:19.406583   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:19.406639   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:19.439351   74485 cri.go:89] found id: ""
	I1105 19:14:19.439377   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.439386   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:19.439392   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:19.439438   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:19.475033   74485 cri.go:89] found id: ""
	I1105 19:14:19.475058   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.475065   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:19.475070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:19.475119   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:19.508638   74485 cri.go:89] found id: ""
	I1105 19:14:19.508662   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.508670   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:19.508678   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:19.508689   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:19.588268   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:19.588293   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:19.588304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:19.671382   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:19.671415   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:19.716497   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:19.716526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:19.769686   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:19.769722   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.283476   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:22.296393   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:22.296456   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:22.331226   74485 cri.go:89] found id: ""
	I1105 19:14:22.331247   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.331255   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:22.331261   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:22.331306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:22.363466   74485 cri.go:89] found id: ""
	I1105 19:14:22.363499   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.363510   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:22.363518   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:22.363586   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:22.397025   74485 cri.go:89] found id: ""
	I1105 19:14:22.397052   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.397061   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:22.397066   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:22.397116   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:22.429450   74485 cri.go:89] found id: ""
	I1105 19:14:22.429476   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.429486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:22.429493   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:22.429554   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:22.461615   74485 cri.go:89] found id: ""
	I1105 19:14:22.461643   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.461654   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:22.461660   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:22.461728   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:22.492470   74485 cri.go:89] found id: ""
	I1105 19:14:22.492502   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.492513   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:22.492521   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:22.492587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:22.525335   74485 cri.go:89] found id: ""
	I1105 19:14:22.525358   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.525366   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:22.525372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:22.525423   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:22.558854   74485 cri.go:89] found id: ""
	I1105 19:14:22.558881   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.558890   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:22.558901   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:22.558916   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:22.608638   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:22.608674   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.621769   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:22.621800   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:14:19.461812   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.960286   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.724482   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:22.224505   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:24.225072   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.347018   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:23.347099   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	W1105 19:14:22.688971   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:22.688998   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:22.689012   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:22.770517   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:22.770558   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:25.315778   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:25.335372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:25.335444   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:25.383988   74485 cri.go:89] found id: ""
	I1105 19:14:25.384019   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.384029   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:25.384036   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:25.384096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:25.432070   74485 cri.go:89] found id: ""
	I1105 19:14:25.432103   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.432115   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:25.432122   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:25.432184   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:25.464859   74485 cri.go:89] found id: ""
	I1105 19:14:25.464891   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.464902   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:25.464909   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:25.464976   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:25.498684   74485 cri.go:89] found id: ""
	I1105 19:14:25.498712   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.498719   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:25.498724   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:25.498777   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:25.532998   74485 cri.go:89] found id: ""
	I1105 19:14:25.533023   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.533032   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:25.533039   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:25.533084   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:25.568101   74485 cri.go:89] found id: ""
	I1105 19:14:25.568130   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.568138   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:25.568144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:25.568208   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:25.600470   74485 cri.go:89] found id: ""
	I1105 19:14:25.600495   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.600503   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:25.600509   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:25.600564   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:25.631792   74485 cri.go:89] found id: ""
	I1105 19:14:25.631824   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.631834   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:25.631845   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:25.631860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:25.683820   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:25.683856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:25.698066   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:25.698095   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:25.764838   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:25.764869   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:25.764886   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:25.838791   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:25.838828   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:23.966002   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.460153   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.724324   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:29.223490   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:25.847528   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.346739   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.376183   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:28.389686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:28.389760   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:28.424180   74485 cri.go:89] found id: ""
	I1105 19:14:28.424209   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.424221   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:28.424229   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:28.424289   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:28.462742   74485 cri.go:89] found id: ""
	I1105 19:14:28.462765   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.462777   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:28.462784   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:28.462839   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:28.494550   74485 cri.go:89] found id: ""
	I1105 19:14:28.494574   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.494581   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:28.494588   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:28.494667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:28.525606   74485 cri.go:89] found id: ""
	I1105 19:14:28.525632   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.525639   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:28.525645   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:28.525696   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:28.558599   74485 cri.go:89] found id: ""
	I1105 19:14:28.558628   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.558638   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:28.558644   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:28.558701   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:28.590496   74485 cri.go:89] found id: ""
	I1105 19:14:28.590522   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.590530   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:28.590535   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:28.590599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:28.622748   74485 cri.go:89] found id: ""
	I1105 19:14:28.622772   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.622780   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:28.622786   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:28.622836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:28.656452   74485 cri.go:89] found id: ""
	I1105 19:14:28.656477   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.656485   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:28.656493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:28.656504   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.736458   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:28.736505   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:28.771923   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:28.771954   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:28.821099   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:28.821133   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:28.834698   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:28.834726   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:28.900543   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.400733   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:31.414573   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:31.414647   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:31.452244   74485 cri.go:89] found id: ""
	I1105 19:14:31.452275   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.452286   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:31.452293   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:31.452353   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:31.485898   74485 cri.go:89] found id: ""
	I1105 19:14:31.485920   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.485935   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:31.485940   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:31.486009   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:31.522826   74485 cri.go:89] found id: ""
	I1105 19:14:31.522850   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.522858   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:31.522865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:31.522925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:31.560096   74485 cri.go:89] found id: ""
	I1105 19:14:31.560136   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.560164   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:31.560174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:31.560234   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:31.596698   74485 cri.go:89] found id: ""
	I1105 19:14:31.596725   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.596733   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:31.596738   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:31.596792   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:31.635109   74485 cri.go:89] found id: ""
	I1105 19:14:31.635138   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.635148   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:31.635156   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:31.635221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:31.667612   74485 cri.go:89] found id: ""
	I1105 19:14:31.667639   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.667651   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:31.667658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:31.667726   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:31.699815   74485 cri.go:89] found id: ""
	I1105 19:14:31.699844   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.699854   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:31.699864   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:31.699879   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:31.737165   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:31.737196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:31.788513   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:31.788550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:31.801580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:31.801609   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:31.871658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.871683   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:31.871696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.462108   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.961875   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:31.223977   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:33.724027   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.847090   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:32.847233   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.847857   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.450954   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:34.466129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:34.466204   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:34.499984   74485 cri.go:89] found id: ""
	I1105 19:14:34.500009   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.500020   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:34.500027   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:34.500091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:34.532923   74485 cri.go:89] found id: ""
	I1105 19:14:34.532950   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.532958   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:34.532969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:34.533017   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:34.566772   74485 cri.go:89] found id: ""
	I1105 19:14:34.566803   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.566811   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:34.566817   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:34.566872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:34.607398   74485 cri.go:89] found id: ""
	I1105 19:14:34.607422   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.607430   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:34.607435   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:34.607497   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:34.640091   74485 cri.go:89] found id: ""
	I1105 19:14:34.640123   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.640135   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:34.640143   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:34.640207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:34.677164   74485 cri.go:89] found id: ""
	I1105 19:14:34.677201   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.677211   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:34.677217   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:34.677266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:34.714900   74485 cri.go:89] found id: ""
	I1105 19:14:34.714931   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.714942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:34.714949   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:34.715023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:34.751003   74485 cri.go:89] found id: ""
	I1105 19:14:34.751032   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.751040   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:34.751048   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:34.751059   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:34.822279   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:34.822301   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:34.822315   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:34.898607   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:34.898640   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:34.934727   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:34.934754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:34.985935   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:34.985969   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.500117   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:37.512467   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:37.512541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:37.544914   74485 cri.go:89] found id: ""
	I1105 19:14:37.544941   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.544952   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:37.544959   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:37.545028   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:37.581507   74485 cri.go:89] found id: ""
	I1105 19:14:37.581535   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.581545   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:37.581553   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:37.581612   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:37.615546   74485 cri.go:89] found id: ""
	I1105 19:14:37.615576   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.615585   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:37.615592   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:37.615667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:37.648239   74485 cri.go:89] found id: ""
	I1105 19:14:37.648267   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.648276   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:37.648283   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:37.648343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:33.460860   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:35.461416   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:36.224852   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:38.725488   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.347563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:39.347732   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.682861   74485 cri.go:89] found id: ""
	I1105 19:14:37.682891   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.682898   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:37.682904   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:37.682952   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:37.715506   74485 cri.go:89] found id: ""
	I1105 19:14:37.715532   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.715540   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:37.715547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:37.715597   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:37.747973   74485 cri.go:89] found id: ""
	I1105 19:14:37.748003   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.748014   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:37.748022   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:37.748083   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:37.780270   74485 cri.go:89] found id: ""
	I1105 19:14:37.780294   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.780302   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:37.780310   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:37.780321   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.793885   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:37.793914   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:37.860114   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:37.860140   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:37.860154   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:37.941221   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:37.941255   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.980537   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:37.980567   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.532301   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:40.545540   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:40.545599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:40.578642   74485 cri.go:89] found id: ""
	I1105 19:14:40.578687   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.578699   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:40.578707   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:40.578772   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:40.612049   74485 cri.go:89] found id: ""
	I1105 19:14:40.612078   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.612089   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:40.612097   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:40.612159   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:40.644495   74485 cri.go:89] found id: ""
	I1105 19:14:40.644519   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.644527   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:40.644532   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:40.644587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:40.676890   74485 cri.go:89] found id: ""
	I1105 19:14:40.676923   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.676931   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:40.676937   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:40.676984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:40.710095   74485 cri.go:89] found id: ""
	I1105 19:14:40.710125   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.710136   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:40.710144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:40.710200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:40.748323   74485 cri.go:89] found id: ""
	I1105 19:14:40.748353   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.748364   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:40.748372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:40.748501   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:40.781578   74485 cri.go:89] found id: ""
	I1105 19:14:40.781606   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.781618   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:40.781626   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:40.781689   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:40.816010   74485 cri.go:89] found id: ""
	I1105 19:14:40.816048   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.816060   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:40.816071   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:40.816086   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.869836   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:40.869876   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:40.883436   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:40.883471   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:40.946538   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:40.946566   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:40.946585   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:41.023085   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:41.023123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.962163   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.461278   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.726894   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.224939   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:41.847053   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:44.346789   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.566841   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:43.579425   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:43.579498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:43.620500   74485 cri.go:89] found id: ""
	I1105 19:14:43.620526   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.620535   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:43.620541   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:43.620600   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:43.652992   74485 cri.go:89] found id: ""
	I1105 19:14:43.653024   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.653035   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:43.653042   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:43.653105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:43.686945   74485 cri.go:89] found id: ""
	I1105 19:14:43.686991   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.687003   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:43.687010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:43.687124   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:43.720075   74485 cri.go:89] found id: ""
	I1105 19:14:43.720103   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.720114   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:43.720121   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:43.720179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:43.757969   74485 cri.go:89] found id: ""
	I1105 19:14:43.757997   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.758005   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:43.758011   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:43.758071   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:43.790068   74485 cri.go:89] found id: ""
	I1105 19:14:43.790094   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.790103   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:43.790109   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:43.790153   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:43.821696   74485 cri.go:89] found id: ""
	I1105 19:14:43.821722   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.821733   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:43.821741   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:43.821803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:43.855976   74485 cri.go:89] found id: ""
	I1105 19:14:43.856003   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.856011   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:43.856019   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:43.856029   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:43.934375   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:43.934409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:43.972567   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:43.972597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:44.025660   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:44.025696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:44.039229   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:44.039258   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:44.112179   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:46.612815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:46.626070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:46.626145   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:46.659184   74485 cri.go:89] found id: ""
	I1105 19:14:46.659210   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.659218   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:46.659227   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:46.659288   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:46.691887   74485 cri.go:89] found id: ""
	I1105 19:14:46.691917   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.691928   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:46.691934   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:46.692003   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:46.725745   74485 cri.go:89] found id: ""
	I1105 19:14:46.725776   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.725787   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:46.725795   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:46.725847   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:46.761733   74485 cri.go:89] found id: ""
	I1105 19:14:46.761762   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.761773   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:46.761780   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:46.761842   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:46.792926   74485 cri.go:89] found id: ""
	I1105 19:14:46.792955   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.792966   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:46.792974   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:46.793036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:46.824462   74485 cri.go:89] found id: ""
	I1105 19:14:46.824503   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.824512   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:46.824519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:46.824580   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:46.865057   74485 cri.go:89] found id: ""
	I1105 19:14:46.865082   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.865090   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:46.865095   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:46.865146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:46.901357   74485 cri.go:89] found id: ""
	I1105 19:14:46.901385   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.901393   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:46.901401   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:46.901414   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:46.951986   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:46.952021   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:46.966035   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:46.966065   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:47.035163   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:47.035184   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:47.035196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:47.115825   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:47.115860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:42.961397   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.460846   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.724189   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.724319   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:46.847553   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.346787   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.658737   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:49.672088   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:49.672182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:49.708638   74485 cri.go:89] found id: ""
	I1105 19:14:49.708666   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.708674   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:49.708679   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:49.708736   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:49.744485   74485 cri.go:89] found id: ""
	I1105 19:14:49.744513   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.744521   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:49.744526   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:49.744572   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:49.779758   74485 cri.go:89] found id: ""
	I1105 19:14:49.779785   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.779794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:49.779800   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:49.779858   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:49.814216   74485 cri.go:89] found id: ""
	I1105 19:14:49.814248   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.814256   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:49.814262   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:49.814310   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:49.851348   74485 cri.go:89] found id: ""
	I1105 19:14:49.851377   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.851389   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:49.851396   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:49.851455   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:49.883866   74485 cri.go:89] found id: ""
	I1105 19:14:49.883897   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.883906   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:49.883912   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:49.883959   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:49.916944   74485 cri.go:89] found id: ""
	I1105 19:14:49.916967   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.916975   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:49.916980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:49.917039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:49.950405   74485 cri.go:89] found id: ""
	I1105 19:14:49.950437   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.950449   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:49.950459   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:49.950475   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:49.996064   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:49.996102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:50.044865   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:50.044902   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:50.058206   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:50.058236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:50.130371   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:50.130397   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:50.130412   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:49.960550   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.961271   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.724896   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.224128   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.346823   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:53.847102   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.706441   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:52.719571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:52.719655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:52.753850   74485 cri.go:89] found id: ""
	I1105 19:14:52.753880   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.753891   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:52.753899   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:52.753961   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:52.794112   74485 cri.go:89] found id: ""
	I1105 19:14:52.794139   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.794149   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:52.794156   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:52.794218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:52.830151   74485 cri.go:89] found id: ""
	I1105 19:14:52.830178   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.830188   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:52.830195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:52.830258   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:52.864803   74485 cri.go:89] found id: ""
	I1105 19:14:52.864832   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.864853   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:52.864868   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:52.864930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:52.897237   74485 cri.go:89] found id: ""
	I1105 19:14:52.897271   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.897282   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:52.897289   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:52.897351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:52.932236   74485 cri.go:89] found id: ""
	I1105 19:14:52.932262   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.932270   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:52.932275   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:52.932319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:52.965781   74485 cri.go:89] found id: ""
	I1105 19:14:52.965808   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.965817   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:52.965825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:52.965918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:52.999098   74485 cri.go:89] found id: ""
	I1105 19:14:52.999121   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.999129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:52.999137   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:52.999146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:53.051085   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:53.051127   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:53.064690   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:53.064717   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:53.128334   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:53.128358   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:53.128372   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:53.207751   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:53.207791   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:55.745430   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:55.758734   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:55.758821   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:55.791827   74485 cri.go:89] found id: ""
	I1105 19:14:55.791854   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.791862   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:55.791868   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:55.791922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:55.824191   74485 cri.go:89] found id: ""
	I1105 19:14:55.824217   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.824224   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:55.824230   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:55.824278   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:55.858579   74485 cri.go:89] found id: ""
	I1105 19:14:55.858611   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.858619   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:55.858625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:55.858673   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:55.891579   74485 cri.go:89] found id: ""
	I1105 19:14:55.891604   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.891612   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:55.891617   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:55.891663   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:55.924881   74485 cri.go:89] found id: ""
	I1105 19:14:55.924910   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.924920   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:55.924930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:55.924999   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:55.956634   74485 cri.go:89] found id: ""
	I1105 19:14:55.956663   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.956678   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:55.956686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:55.956742   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:55.988770   74485 cri.go:89] found id: ""
	I1105 19:14:55.988803   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.988814   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:55.988821   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:55.988880   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:56.022236   74485 cri.go:89] found id: ""
	I1105 19:14:56.022257   74485 logs.go:282] 0 containers: []
	W1105 19:14:56.022266   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:56.022273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:56.022284   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:56.073035   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:56.073071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:56.086899   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:56.086923   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:56.158219   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:56.158247   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:56.158259   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:56.246621   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:56.246660   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:53.962537   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.461516   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:54.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.725381   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:59.223995   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:55.847591   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.346027   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:00.349718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.791443   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:58.804398   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:58.804476   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:58.837812   74485 cri.go:89] found id: ""
	I1105 19:14:58.837840   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.837856   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:58.837863   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:58.837926   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:58.870154   74485 cri.go:89] found id: ""
	I1105 19:14:58.870186   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.870197   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:58.870204   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:58.870268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:58.906518   74485 cri.go:89] found id: ""
	I1105 19:14:58.906545   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.906553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:58.906563   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:58.906614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:58.939320   74485 cri.go:89] found id: ""
	I1105 19:14:58.939346   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.939357   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:58.939364   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:58.939426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:58.974116   74485 cri.go:89] found id: ""
	I1105 19:14:58.974143   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.974153   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:58.974160   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:58.974221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:59.006820   74485 cri.go:89] found id: ""
	I1105 19:14:59.006854   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.006866   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:59.006873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:59.006933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:59.039691   74485 cri.go:89] found id: ""
	I1105 19:14:59.039723   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.039735   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:59.039742   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:59.039800   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:59.071829   74485 cri.go:89] found id: ""
	I1105 19:14:59.071860   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.071881   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:59.071893   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:59.071906   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:59.124158   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:59.124195   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:59.138563   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:59.138594   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:59.216148   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:59.216174   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:59.216189   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:59.295262   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:59.295297   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:01.833789   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:01.847332   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:01.847408   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:01.882721   74485 cri.go:89] found id: ""
	I1105 19:15:01.882743   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.882750   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:01.882755   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:01.882811   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:01.916457   74485 cri.go:89] found id: ""
	I1105 19:15:01.916479   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.916487   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:01.916502   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:01.916557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:01.950521   74485 cri.go:89] found id: ""
	I1105 19:15:01.950552   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.950564   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:01.950571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:01.950624   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:01.985823   74485 cri.go:89] found id: ""
	I1105 19:15:01.985852   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.985862   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:01.985870   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:01.985918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:02.021689   74485 cri.go:89] found id: ""
	I1105 19:15:02.021720   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.021731   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:02.021739   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:02.021804   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:02.058632   74485 cri.go:89] found id: ""
	I1105 19:15:02.058658   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.058666   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:02.058672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:02.058738   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:02.097916   74485 cri.go:89] found id: ""
	I1105 19:15:02.097947   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.097956   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:02.097961   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:02.098010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:02.131992   74485 cri.go:89] found id: ""
	I1105 19:15:02.132027   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.132038   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:02.132050   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:02.132066   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:02.188605   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:02.188645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:02.201873   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:02.201904   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:02.274767   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:02.274795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:02.274811   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:02.358520   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:02.358559   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:58.962072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.461009   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.224719   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:03.724333   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:02.847593   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.348665   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:04.897693   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:04.913131   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:04.913207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:04.952546   74485 cri.go:89] found id: ""
	I1105 19:15:04.952571   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.952579   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:04.952584   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:04.952643   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:04.987334   74485 cri.go:89] found id: ""
	I1105 19:15:04.987360   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.987368   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:04.987374   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:04.987434   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:05.021873   74485 cri.go:89] found id: ""
	I1105 19:15:05.021906   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.021919   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:05.021926   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:05.021985   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:05.056169   74485 cri.go:89] found id: ""
	I1105 19:15:05.056199   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.056208   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:05.056213   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:05.056265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:05.093090   74485 cri.go:89] found id: ""
	I1105 19:15:05.093117   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.093125   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:05.093130   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:05.093182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:05.127533   74485 cri.go:89] found id: ""
	I1105 19:15:05.127557   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.127564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:05.127576   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:05.127625   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:05.165127   74485 cri.go:89] found id: ""
	I1105 19:15:05.165162   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.165173   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:05.165180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:05.165243   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:05.200526   74485 cri.go:89] found id: ""
	I1105 19:15:05.200556   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.200567   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:05.200578   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:05.200593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:05.247497   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:05.247535   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:05.261963   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:05.261996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:05.336813   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:05.336833   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:05.336844   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:05.412278   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:05.412320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:03.461266   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.463142   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.728530   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:08.227700   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.848748   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:10.346754   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.951085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:07.966125   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:07.966203   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:08.004253   74485 cri.go:89] found id: ""
	I1105 19:15:08.004291   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.004302   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:08.004310   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:08.004373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:08.039539   74485 cri.go:89] found id: ""
	I1105 19:15:08.039562   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.039569   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:08.039575   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:08.039629   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:08.076043   74485 cri.go:89] found id: ""
	I1105 19:15:08.076080   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.076093   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:08.076101   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:08.076157   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:08.110489   74485 cri.go:89] found id: ""
	I1105 19:15:08.110512   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.110519   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:08.110525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:08.110589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:08.147532   74485 cri.go:89] found id: ""
	I1105 19:15:08.147564   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.147574   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:08.147580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:08.147628   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:08.182225   74485 cri.go:89] found id: ""
	I1105 19:15:08.182248   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.182256   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:08.182263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:08.182322   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:08.223488   74485 cri.go:89] found id: ""
	I1105 19:15:08.223524   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.223536   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:08.223544   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:08.223610   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:08.266524   74485 cri.go:89] found id: ""
	I1105 19:15:08.266559   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.266571   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:08.266582   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:08.266597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:08.279036   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:08.279061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:08.346030   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:08.346052   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:08.346064   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:08.428081   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:08.428118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:08.464760   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:08.464789   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.016193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:11.030598   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:11.030681   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:11.066035   74485 cri.go:89] found id: ""
	I1105 19:15:11.066064   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.066073   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:11.066078   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:11.066133   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:11.103906   74485 cri.go:89] found id: ""
	I1105 19:15:11.103937   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.103948   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:11.103955   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:11.104023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:11.142936   74485 cri.go:89] found id: ""
	I1105 19:15:11.143024   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.143034   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:11.143041   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:11.143091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:11.180041   74485 cri.go:89] found id: ""
	I1105 19:15:11.180074   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.180086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:11.180094   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:11.180158   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:11.215661   74485 cri.go:89] found id: ""
	I1105 19:15:11.215693   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.215701   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:11.215707   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:11.215758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:11.252603   74485 cri.go:89] found id: ""
	I1105 19:15:11.252651   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.252663   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:11.252672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:11.252739   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:11.299295   74485 cri.go:89] found id: ""
	I1105 19:15:11.299328   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.299340   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:11.299347   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:11.299402   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:11.355153   74485 cri.go:89] found id: ""
	I1105 19:15:11.355177   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.355185   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:11.355193   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:11.355206   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:11.441076   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:11.441118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:11.480367   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:11.480396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.534646   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:11.534683   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:11.548141   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:11.548170   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:11.616452   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:07.961073   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:09.962118   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.455874   73732 pod_ready.go:82] duration metric: took 4m0.000853559s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:12.455911   73732 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:15:12.455936   73732 pod_ready.go:39] duration metric: took 4m14.55377544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:12.455984   73732 kubeadm.go:597] duration metric: took 4m23.030552871s to restartPrimaryControlPlane
	W1105 19:15:12.456078   73732 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:12.456111   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:10.724247   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.725886   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.846646   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.848074   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.117448   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:14.131224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:14.131297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:14.167811   74485 cri.go:89] found id: ""
	I1105 19:15:14.167843   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.167855   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:14.167862   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:14.167921   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:14.204128   74485 cri.go:89] found id: ""
	I1105 19:15:14.204156   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.204164   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:14.204169   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:14.204232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:14.240687   74485 cri.go:89] found id: ""
	I1105 19:15:14.240716   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.240727   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:14.240735   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:14.240788   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:14.274204   74485 cri.go:89] found id: ""
	I1105 19:15:14.274231   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.274242   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:14.274249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:14.274307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:14.312090   74485 cri.go:89] found id: ""
	I1105 19:15:14.312119   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.312130   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:14.312139   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:14.312200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:14.346824   74485 cri.go:89] found id: ""
	I1105 19:15:14.346857   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.346868   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:14.346875   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:14.346934   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:14.380634   74485 cri.go:89] found id: ""
	I1105 19:15:14.380668   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.380679   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:14.380686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:14.380746   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:14.414402   74485 cri.go:89] found id: ""
	I1105 19:15:14.414432   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.414441   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:14.414449   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:14.414459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:14.464542   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:14.464581   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:14.478195   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:14.478225   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:14.553670   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:14.553693   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:14.553708   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:14.634619   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:14.634659   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.174085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:17.191712   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:17.191771   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:17.234101   74485 cri.go:89] found id: ""
	I1105 19:15:17.234132   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.234143   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:17.234149   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:17.234213   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:17.281548   74485 cri.go:89] found id: ""
	I1105 19:15:17.281574   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.281581   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:17.281588   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:17.281655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:17.337698   74485 cri.go:89] found id: ""
	I1105 19:15:17.337727   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.337735   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:17.337743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:17.337790   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:17.371756   74485 cri.go:89] found id: ""
	I1105 19:15:17.371782   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.371790   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:17.371796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:17.371854   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:17.404989   74485 cri.go:89] found id: ""
	I1105 19:15:17.405015   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.405026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:17.405033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:17.405096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:17.438613   74485 cri.go:89] found id: ""
	I1105 19:15:17.438637   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.438648   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:17.438656   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:17.438717   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:17.470465   74485 cri.go:89] found id: ""
	I1105 19:15:17.470494   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.470502   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:17.470508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:17.470558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:17.503835   74485 cri.go:89] found id: ""
	I1105 19:15:17.503867   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.503876   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:17.503884   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:17.503896   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:17.584110   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:17.584146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.626928   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:17.626955   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:15.223749   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.225434   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.347847   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:19.847047   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.679356   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:17.679397   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:17.693476   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:17.693506   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:17.766809   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.266926   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:20.282219   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:20.282293   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:20.322133   74485 cri.go:89] found id: ""
	I1105 19:15:20.322163   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.322171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:20.322178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:20.322248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:20.357030   74485 cri.go:89] found id: ""
	I1105 19:15:20.357072   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.357084   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:20.357091   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:20.357156   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:20.390523   74485 cri.go:89] found id: ""
	I1105 19:15:20.390549   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.390559   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:20.390567   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:20.390631   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:20.425807   74485 cri.go:89] found id: ""
	I1105 19:15:20.425830   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.425837   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:20.425843   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:20.425903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:20.461984   74485 cri.go:89] found id: ""
	I1105 19:15:20.462014   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.462026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:20.462033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:20.462094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:20.495689   74485 cri.go:89] found id: ""
	I1105 19:15:20.495725   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.495739   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:20.495746   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:20.495799   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:20.528666   74485 cri.go:89] found id: ""
	I1105 19:15:20.528701   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.528713   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:20.528721   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:20.528783   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:20.562566   74485 cri.go:89] found id: ""
	I1105 19:15:20.562596   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.562606   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:20.562614   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:20.562624   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:20.610961   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:20.611000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:20.623898   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:20.623928   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:20.696412   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.696440   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:20.696456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:20.779601   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:20.779642   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:19.725198   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.224019   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.225286   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.347992   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.846718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:23.319846   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:23.333278   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:23.333357   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:23.370771   74485 cri.go:89] found id: ""
	I1105 19:15:23.370796   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.370805   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:23.370810   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:23.370872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:23.405994   74485 cri.go:89] found id: ""
	I1105 19:15:23.406021   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.406029   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:23.406034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:23.406092   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:23.443729   74485 cri.go:89] found id: ""
	I1105 19:15:23.443757   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.443767   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:23.443774   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:23.443836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:23.476162   74485 cri.go:89] found id: ""
	I1105 19:15:23.476188   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.476197   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:23.476205   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:23.476266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:23.509325   74485 cri.go:89] found id: ""
	I1105 19:15:23.509353   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.509363   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:23.509371   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:23.509427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:23.541880   74485 cri.go:89] found id: ""
	I1105 19:15:23.541912   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.541922   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:23.541929   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:23.541993   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:23.574204   74485 cri.go:89] found id: ""
	I1105 19:15:23.574236   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.574248   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:23.574256   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:23.574323   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:23.606865   74485 cri.go:89] found id: ""
	I1105 19:15:23.606896   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.606908   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:23.606918   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:23.606932   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:23.673771   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:23.673792   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:23.673803   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:23.753298   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:23.753335   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:23.792273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:23.792304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:23.843072   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:23.843110   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.356859   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:26.369417   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:26.369488   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:26.403611   74485 cri.go:89] found id: ""
	I1105 19:15:26.403639   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.403647   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:26.403653   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:26.403725   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:26.439891   74485 cri.go:89] found id: ""
	I1105 19:15:26.439924   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.439936   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:26.439943   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:26.439991   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:26.473502   74485 cri.go:89] found id: ""
	I1105 19:15:26.473542   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.473554   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:26.473561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:26.473640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:26.505666   74485 cri.go:89] found id: ""
	I1105 19:15:26.505695   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.505703   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:26.505710   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:26.505769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:26.539781   74485 cri.go:89] found id: ""
	I1105 19:15:26.539815   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.539827   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:26.539835   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:26.539911   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:26.574673   74485 cri.go:89] found id: ""
	I1105 19:15:26.574712   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.574721   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:26.574727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:26.574773   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:26.608410   74485 cri.go:89] found id: ""
	I1105 19:15:26.608433   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.608441   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:26.608446   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:26.608494   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:26.644036   74485 cri.go:89] found id: ""
	I1105 19:15:26.644065   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.644076   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:26.644087   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:26.644098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.718901   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:26.718937   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:26.758920   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:26.758953   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:26.811241   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:26.811277   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.824931   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:26.824961   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:26.891799   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:26.725062   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:27.724594   74141 pod_ready.go:82] duration metric: took 4m0.006622979s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:27.724627   74141 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1105 19:15:27.724644   74141 pod_ready.go:39] duration metric: took 4m0.807889519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:27.724663   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:15:27.724711   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:27.724769   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:27.771870   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:27.771897   74141 cri.go:89] found id: ""
	I1105 19:15:27.771906   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:27.771966   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.776484   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:27.776553   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:27.823529   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:27.823564   74141 cri.go:89] found id: ""
	I1105 19:15:27.823576   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:27.823638   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.828600   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:27.828685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:27.878206   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:27.878242   74141 cri.go:89] found id: ""
	I1105 19:15:27.878254   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:27.878317   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.882545   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:27.882640   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:27.920102   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:27.920127   74141 cri.go:89] found id: ""
	I1105 19:15:27.920137   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:27.920189   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.924516   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:27.924593   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:27.969493   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:27.969523   74141 cri.go:89] found id: ""
	I1105 19:15:27.969534   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:27.969589   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.973637   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:27.973724   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:28.014369   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.014396   74141 cri.go:89] found id: ""
	I1105 19:15:28.014405   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:28.014463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.019040   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:28.019112   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:28.056411   74141 cri.go:89] found id: ""
	I1105 19:15:28.056438   74141 logs.go:282] 0 containers: []
	W1105 19:15:28.056446   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:28.056452   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:28.056502   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:28.099541   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.099562   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.099566   74141 cri.go:89] found id: ""
	I1105 19:15:28.099573   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:28.099628   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.104144   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.108443   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:28.108465   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.153262   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:28.153302   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.197210   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:28.197237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:28.242915   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:28.242944   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:28.257468   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:28.257497   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:28.299795   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:28.299825   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:28.333983   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:28.334015   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:28.369174   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:28.369202   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:28.405838   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:28.405869   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:28.477842   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:28.477880   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:28.595832   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:28.595865   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:28.639146   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:28.639179   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.689519   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:28.689554   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.846977   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:28.847878   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:29.392417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:29.405249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:29.405331   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:29.437397   74485 cri.go:89] found id: ""
	I1105 19:15:29.437432   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.437443   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:29.437450   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:29.437504   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:29.469908   74485 cri.go:89] found id: ""
	I1105 19:15:29.469938   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.469946   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:29.469951   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:29.470008   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:29.502302   74485 cri.go:89] found id: ""
	I1105 19:15:29.502331   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.502339   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:29.502345   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:29.502391   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:29.534285   74485 cri.go:89] found id: ""
	I1105 19:15:29.534309   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.534317   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:29.534322   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:29.534373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:29.571918   74485 cri.go:89] found id: ""
	I1105 19:15:29.571962   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.571973   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:29.571983   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:29.572042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:29.605324   74485 cri.go:89] found id: ""
	I1105 19:15:29.605354   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.605365   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:29.605373   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:29.605435   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:29.640181   74485 cri.go:89] found id: ""
	I1105 19:15:29.640210   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.640218   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:29.640224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:29.640273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:29.671121   74485 cri.go:89] found id: ""
	I1105 19:15:29.671147   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.671155   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:29.671164   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:29.671174   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:29.750821   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:29.750856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:29.787452   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:29.787479   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:29.840413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:29.840459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:29.855540   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:29.855580   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:29.925849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:32.426016   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:32.438759   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:32.438824   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:32.476376   74485 cri.go:89] found id: ""
	I1105 19:15:32.476406   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.476416   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:32.476423   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:32.476490   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:32.512328   74485 cri.go:89] found id: ""
	I1105 19:15:32.512352   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.512360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:32.512365   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:32.512414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:32.546803   74485 cri.go:89] found id: ""
	I1105 19:15:32.546833   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.546844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:32.546851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:32.546925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:32.585904   74485 cri.go:89] found id: ""
	I1105 19:15:32.585934   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.585946   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:32.585953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:32.586014   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:32.620976   74485 cri.go:89] found id: ""
	I1105 19:15:32.621005   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.621012   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:32.621018   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:32.621082   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.668028   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:31.684024   74141 api_server.go:72] duration metric: took 4m12.496021782s to wait for apiserver process to appear ...
	I1105 19:15:31.684060   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:15:31.684105   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:31.684163   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:31.719462   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:31.719496   74141 cri.go:89] found id: ""
	I1105 19:15:31.719506   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:31.719559   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.723632   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:31.723702   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:31.761976   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:31.762001   74141 cri.go:89] found id: ""
	I1105 19:15:31.762010   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:31.762067   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.766066   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:31.766137   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:31.799673   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:31.799694   74141 cri.go:89] found id: ""
	I1105 19:15:31.799701   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:31.799753   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.803632   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:31.803714   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:31.841782   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:31.841808   74141 cri.go:89] found id: ""
	I1105 19:15:31.841818   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:31.841873   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.850409   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:31.850471   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:31.891932   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:31.891959   74141 cri.go:89] found id: ""
	I1105 19:15:31.891969   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:31.892026   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.896065   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:31.896125   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.932759   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:31.932781   74141 cri.go:89] found id: ""
	I1105 19:15:31.932788   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:31.932831   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.936611   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:31.936685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:31.971296   74141 cri.go:89] found id: ""
	I1105 19:15:31.971328   74141 logs.go:282] 0 containers: []
	W1105 19:15:31.971339   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:31.971348   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:31.971410   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:32.006153   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:32.006173   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.006177   74141 cri.go:89] found id: ""
	I1105 19:15:32.006184   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:32.006226   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.010159   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.013807   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.013831   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.084222   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:32.084273   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:32.127875   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:32.127928   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:32.173008   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:32.173041   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:32.235366   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.235402   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.714822   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:32.714861   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.750733   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.750761   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.796233   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.796264   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.809269   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.809296   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:32.931162   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:32.931196   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:32.968551   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:32.968578   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:33.008115   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:33.008152   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:33.046201   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:33.046237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:31.346652   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:33.347118   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:32.658958   74485 cri.go:89] found id: ""
	I1105 19:15:32.659006   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.659018   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:32.659026   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:32.659091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:32.694317   74485 cri.go:89] found id: ""
	I1105 19:15:32.694341   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.694349   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:32.694354   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:32.694403   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:32.728277   74485 cri.go:89] found id: ""
	I1105 19:15:32.728314   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.728327   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:32.728338   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.728352   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.815579   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.815615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.856776   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.856807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.909477   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.909518   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.923789   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.923817   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:32.997898   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:35.498040   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:35.511537   74485 kubeadm.go:597] duration metric: took 4m4.46832509s to restartPrimaryControlPlane
	W1105 19:15:35.511612   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:35.511644   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:35.586678   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:15:35.591512   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:15:35.592489   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:15:35.592507   74141 api_server.go:131] duration metric: took 3.908440367s to wait for apiserver health ...
	I1105 19:15:35.592514   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:15:35.592538   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:35.592589   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:35.636389   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.636408   74141 cri.go:89] found id: ""
	I1105 19:15:35.636416   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:35.636463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.640778   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:35.640839   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:35.676793   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:35.676818   74141 cri.go:89] found id: ""
	I1105 19:15:35.676828   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:35.676890   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.681596   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:35.681669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:35.721728   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:35.721754   74141 cri.go:89] found id: ""
	I1105 19:15:35.721763   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:35.721808   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.725619   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:35.725677   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:35.765348   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:35.765377   74141 cri.go:89] found id: ""
	I1105 19:15:35.765386   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:35.765439   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.769594   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:35.769669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:35.809427   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:35.809452   74141 cri.go:89] found id: ""
	I1105 19:15:35.809460   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:35.809505   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.814317   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:35.814376   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:35.853861   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:35.853882   74141 cri.go:89] found id: ""
	I1105 19:15:35.853890   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:35.853934   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.857734   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:35.857787   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:35.897791   74141 cri.go:89] found id: ""
	I1105 19:15:35.897816   74141 logs.go:282] 0 containers: []
	W1105 19:15:35.897824   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:35.897830   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:35.897887   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:35.940906   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:35.940940   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:35.940946   74141 cri.go:89] found id: ""
	I1105 19:15:35.940954   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:35.941006   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.945200   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.948860   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:35.948884   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.992660   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:35.992690   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:36.033586   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:36.033617   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:36.066599   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:36.066643   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:36.104895   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:36.104932   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:36.489747   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:36.489781   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:36.531923   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:36.531952   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:36.598718   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:36.598758   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:36.612969   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:36.612998   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:36.718535   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:36.718568   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:36.755636   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:36.755677   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:36.815561   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:36.815640   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:36.850878   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:36.850904   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:39.390699   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:15:39.390733   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.390738   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.390743   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.390747   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.390750   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.390753   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.390760   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.390764   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.390771   74141 system_pods.go:74] duration metric: took 3.798251189s to wait for pod list to return data ...
	I1105 19:15:39.390777   74141 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:15:39.393894   74141 default_sa.go:45] found service account: "default"
	I1105 19:15:39.393914   74141 default_sa.go:55] duration metric: took 3.132788ms for default service account to be created ...
	I1105 19:15:39.393929   74141 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:15:39.398455   74141 system_pods.go:86] 8 kube-system pods found
	I1105 19:15:39.398480   74141 system_pods.go:89] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.398485   74141 system_pods.go:89] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.398490   74141 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.398494   74141 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.398497   74141 system_pods.go:89] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.398501   74141 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.398508   74141 system_pods.go:89] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.398512   74141 system_pods.go:89] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.398520   74141 system_pods.go:126] duration metric: took 4.586494ms to wait for k8s-apps to be running ...
	I1105 19:15:39.398529   74141 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:15:39.398569   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.413878   74141 system_svc.go:56] duration metric: took 15.340417ms WaitForService to wait for kubelet
	I1105 19:15:39.413908   74141 kubeadm.go:582] duration metric: took 4m20.225910976s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:15:39.413936   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:15:39.416851   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:15:39.416870   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:15:39.416880   74141 node_conditions.go:105] duration metric: took 2.939584ms to run NodePressure ...
	I1105 19:15:39.416891   74141 start.go:241] waiting for startup goroutines ...
	I1105 19:15:39.416899   74141 start.go:246] waiting for cluster config update ...
	I1105 19:15:39.416911   74141 start.go:255] writing updated cluster config ...
	I1105 19:15:39.417211   74141 ssh_runner.go:195] Run: rm -f paused
	I1105 19:15:39.463773   74141 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:15:39.465688   74141 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-608095" cluster and "default" namespace by default
	I1105 19:15:39.702249   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.19058336s)
	I1105 19:15:39.702314   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.717966   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:39.728114   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:39.740451   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:39.740476   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:39.740519   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:39.751089   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:39.751150   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:39.761832   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:39.771841   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:39.771904   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:39.782332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.792379   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:39.792438   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.801625   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:39.811691   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:39.811740   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:39.821162   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:39.891377   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:15:39.891443   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:40.034176   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:40.034337   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:40.034476   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:15:40.211588   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:35.847491   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:38.346965   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.348252   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.213724   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:40.213838   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:40.213939   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:40.214048   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:40.214172   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:40.214266   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:40.214375   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:40.214478   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:40.214567   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:40.214687   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:40.214819   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:40.214884   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:40.214980   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:40.358606   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:40.632263   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:40.766570   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:40.885914   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:40.902379   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:40.903647   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:40.903716   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:41.040274   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:41.042093   74485 out.go:235]   - Booting up control plane ...
	I1105 19:15:41.042222   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:41.048448   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:41.058445   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:41.059466   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:41.062648   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:15:38.649673   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193536212s)
	I1105 19:15:38.649753   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:38.665214   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:38.674520   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:38.684078   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:38.684102   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:38.684151   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:38.693169   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:38.693239   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:38.702305   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:38.710796   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:38.710868   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:38.719716   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.728090   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:38.728143   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.737219   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:38.745625   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:38.745692   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:38.754684   73732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:38.914343   73732 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:15:42.847011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:44.851431   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:47.368221   73732 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:15:47.368296   73732 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:47.368405   73732 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:47.368552   73732 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:47.368686   73732 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:15:47.368787   73732 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:47.370333   73732 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:47.370429   73732 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:47.370529   73732 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:47.370650   73732 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:47.370763   73732 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:47.370900   73732 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:47.371009   73732 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:47.371110   73732 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:47.371198   73732 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:47.371312   73732 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:47.371431   73732 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:47.371494   73732 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:47.371573   73732 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:47.371656   73732 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:47.371725   73732 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:15:47.371797   73732 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:47.371893   73732 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:47.371976   73732 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:47.372074   73732 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:47.372160   73732 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:47.374386   73732 out.go:235]   - Booting up control plane ...
	I1105 19:15:47.374503   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:47.374622   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:47.374707   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:47.374838   73732 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:47.374950   73732 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:47.375046   73732 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:47.375226   73732 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:15:47.375367   73732 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:15:47.375450   73732 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.124171ms
	I1105 19:15:47.375549   73732 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:15:47.375647   73732 kubeadm.go:310] [api-check] The API server is healthy after 5.001431223s
	I1105 19:15:47.375804   73732 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:15:47.375968   73732 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:15:47.376055   73732 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:15:47.376321   73732 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-271881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:15:47.376412   73732 kubeadm.go:310] [bootstrap-token] Using token: 2xak8n.owgv6oncwawjarav
	I1105 19:15:47.377766   73732 out.go:235]   - Configuring RBAC rules ...
	I1105 19:15:47.377911   73732 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:15:47.378024   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:15:47.378138   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:15:47.378243   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:15:47.378337   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:15:47.378408   73732 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:15:47.378502   73732 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:15:47.378541   73732 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:15:47.378580   73732 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:15:47.378587   73732 kubeadm.go:310] 
	I1105 19:15:47.378635   73732 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:15:47.378645   73732 kubeadm.go:310] 
	I1105 19:15:47.378711   73732 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:15:47.378718   73732 kubeadm.go:310] 
	I1105 19:15:47.378760   73732 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:15:47.378813   73732 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:15:47.378856   73732 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:15:47.378860   73732 kubeadm.go:310] 
	I1105 19:15:47.378910   73732 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:15:47.378913   73732 kubeadm.go:310] 
	I1105 19:15:47.378955   73732 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:15:47.378959   73732 kubeadm.go:310] 
	I1105 19:15:47.379030   73732 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:15:47.379114   73732 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:15:47.379195   73732 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:15:47.379203   73732 kubeadm.go:310] 
	I1105 19:15:47.379320   73732 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:15:47.379427   73732 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:15:47.379442   73732 kubeadm.go:310] 
	I1105 19:15:47.379559   73732 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.379718   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:15:47.379762   73732 kubeadm.go:310] 	--control-plane 
	I1105 19:15:47.379770   73732 kubeadm.go:310] 
	I1105 19:15:47.379844   73732 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:15:47.379851   73732 kubeadm.go:310] 
	I1105 19:15:47.379977   73732 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.380150   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:15:47.380167   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:15:47.380174   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:15:47.381714   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:15:47.382944   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:15:47.394080   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:15:47.411715   73732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:15:47.411773   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.411821   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-271881 minikube.k8s.io/updated_at=2024_11_05T19_15_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=embed-certs-271881 minikube.k8s.io/primary=true
	I1105 19:15:47.439084   73732 ops.go:34] apiserver oom_adj: -16
	I1105 19:15:47.601691   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.348094   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:49.847296   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:48.102103   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:48.602767   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.101780   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.601826   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.101976   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.602763   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.102779   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.601930   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.102574   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.241636   73732 kubeadm.go:1113] duration metric: took 4.829922813s to wait for elevateKubeSystemPrivileges
	I1105 19:15:52.241680   73732 kubeadm.go:394] duration metric: took 5m2.866246993s to StartCluster
	I1105 19:15:52.241704   73732 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.241801   73732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:15:52.244409   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.244716   73732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:15:52.244789   73732 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:15:52.244893   73732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-271881"
	I1105 19:15:52.244914   73732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-271881"
	I1105 19:15:52.244911   73732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-271881"
	I1105 19:15:52.244933   73732 addons.go:69] Setting metrics-server=true in profile "embed-certs-271881"
	I1105 19:15:52.244941   73732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-271881"
	I1105 19:15:52.244954   73732 addons.go:234] Setting addon metrics-server=true in "embed-certs-271881"
	W1105 19:15:52.244965   73732 addons.go:243] addon metrics-server should already be in state true
	I1105 19:15:52.244998   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1105 19:15:52.244925   73732 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:15:52.245001   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245065   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245404   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245422   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245436   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245455   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245464   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245543   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.246341   73732 out.go:177] * Verifying Kubernetes components...
	I1105 19:15:52.247801   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:15:52.261802   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I1105 19:15:52.262325   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.262955   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.263159   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.263591   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.264367   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.264413   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.265696   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I1105 19:15:52.265941   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I1105 19:15:52.266161   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266322   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266776   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266782   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266800   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.266803   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.267185   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267224   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267353   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.267804   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.267846   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.271094   73732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-271881"
	W1105 19:15:52.271117   73732 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:15:52.271147   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.271509   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.271554   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.284180   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I1105 19:15:52.284456   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1105 19:15:52.284703   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.284925   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.285248   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285261   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285355   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285363   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285578   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285727   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285766   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.285862   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.287834   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.288259   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.290341   73732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:15:52.290346   73732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:15:52.290695   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I1105 19:15:52.291040   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.291464   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.291479   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.291776   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.291974   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:15:52.291994   73732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:15:52.292015   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292054   73732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.292067   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:15:52.292079   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292355   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.292400   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.295296   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295650   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.295675   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295701   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295797   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.295969   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296102   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296247   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.296272   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.296305   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.296582   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.296714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296848   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296947   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.314049   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I1105 19:15:52.314561   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.315148   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.315168   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.315884   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.316080   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.318146   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.318465   73732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.318478   73732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:15:52.318496   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.321312   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321825   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.321850   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321885   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.322095   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.322238   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.322397   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.453762   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:15:52.483722   73732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493492   73732 node_ready.go:49] node "embed-certs-271881" has status "Ready":"True"
	I1105 19:15:52.493519   73732 node_ready.go:38] duration metric: took 9.757528ms for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493530   73732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:52.508208   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:15:52.577925   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.589366   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:15:52.589389   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:15:52.612570   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:15:52.612593   73732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:15:52.645851   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.647686   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:52.647713   73732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:15:52.668865   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:53.246894   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246918   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.246923   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246950   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247230   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247277   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247305   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247323   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247338   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247349   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247331   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247368   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247378   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247710   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247739   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247746   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247779   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247800   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247811   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.269143   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.269165   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.269465   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.269479   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.269483   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.494717   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.494741   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495080   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495100   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495114   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.495123   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495348   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.495394   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495414   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495427   73732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-271881"
	I1105 19:15:53.497126   73732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:15:52.347616   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:54.352434   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:53.498891   73732 addons.go:510] duration metric: took 1.254108253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1105 19:15:54.518219   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:57.015647   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:56.846198   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:58.847684   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:59.514759   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:01.514818   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:02.515124   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.515148   73732 pod_ready.go:82] duration metric: took 10.006914802s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.515158   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519864   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.519889   73732 pod_ready.go:82] duration metric: took 4.723101ms for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519900   73732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524948   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.524970   73732 pod_ready.go:82] duration metric: took 5.063029ms for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524979   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529710   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.529739   73732 pod_ready.go:82] duration metric: took 4.753888ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529750   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534282   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.534301   73732 pod_ready.go:82] duration metric: took 4.544677ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534309   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912364   73732 pod_ready.go:93] pod "kube-proxy-nfxcj" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.912387   73732 pod_ready.go:82] duration metric: took 378.071939ms for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912397   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311793   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:03.311816   73732 pod_ready.go:82] duration metric: took 399.412502ms for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311822   73732 pod_ready.go:39] duration metric: took 10.818282425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:03.311836   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:16:03.311883   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:16:03.327913   73732 api_server.go:72] duration metric: took 11.083157176s to wait for apiserver process to appear ...
	I1105 19:16:03.327947   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:16:03.327968   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:16:03.334499   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:16:03.335530   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:16:03.335550   73732 api_server.go:131] duration metric: took 7.596072ms to wait for apiserver health ...
	I1105 19:16:03.335558   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:16:03.514782   73732 system_pods.go:59] 9 kube-system pods found
	I1105 19:16:03.514813   73732 system_pods.go:61] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.514820   73732 system_pods.go:61] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.514825   73732 system_pods.go:61] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.514830   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.514835   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.514840   73732 system_pods.go:61] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.514844   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.514854   73732 system_pods.go:61] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.514859   73732 system_pods.go:61] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.514868   73732 system_pods.go:74] duration metric: took 179.304519ms to wait for pod list to return data ...
	I1105 19:16:03.514877   73732 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:16:03.712690   73732 default_sa.go:45] found service account: "default"
	I1105 19:16:03.712719   73732 default_sa.go:55] duration metric: took 197.831177ms for default service account to be created ...
	I1105 19:16:03.712731   73732 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:16:03.916858   73732 system_pods.go:86] 9 kube-system pods found
	I1105 19:16:03.916893   73732 system_pods.go:89] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.916902   73732 system_pods.go:89] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.916908   73732 system_pods.go:89] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.916913   73732 system_pods.go:89] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.916918   73732 system_pods.go:89] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.916921   73732 system_pods.go:89] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.916924   73732 system_pods.go:89] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.916934   73732 system_pods.go:89] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.916941   73732 system_pods.go:89] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.916953   73732 system_pods.go:126] duration metric: took 204.215711ms to wait for k8s-apps to be running ...
	I1105 19:16:03.916963   73732 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:16:03.917019   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:03.931369   73732 system_svc.go:56] duration metric: took 14.397556ms WaitForService to wait for kubelet
	I1105 19:16:03.931407   73732 kubeadm.go:582] duration metric: took 11.686653516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:16:03.931454   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:16:04.111904   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:16:04.111928   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:16:04.111937   73732 node_conditions.go:105] duration metric: took 180.475073ms to run NodePressure ...
	I1105 19:16:04.111947   73732 start.go:241] waiting for startup goroutines ...
	I1105 19:16:04.111953   73732 start.go:246] waiting for cluster config update ...
	I1105 19:16:04.111962   73732 start.go:255] writing updated cluster config ...
	I1105 19:16:04.112197   73732 ssh_runner.go:195] Run: rm -f paused
	I1105 19:16:04.158775   73732 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:16:04.160801   73732 out.go:177] * Done! kubectl is now configured to use "embed-certs-271881" cluster and "default" namespace by default
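In the embed-certs run above, the "Done!" line is gated on https://192.168.39.58:8443/healthz returning 200 "ok" (api_server.go at 19:16:03). Below is a minimal standard-library sketch of that kind of probe; it is not minikube's implementation, and skipping TLS verification is an assumption made only because this sketch does not load the cluster CA.

	// healthzprobe.go: poll an apiserver /healthz endpoint until it answers 200 "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; a real check would trust the cluster CA
		// instead of disabling verification.
		url := "https://192.168.39.58:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}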
	I1105 19:16:01.346039   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:03.346369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:05.846866   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:08.346383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:10.346570   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:12.347171   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:14.846335   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.840591   73496 pod_ready.go:82] duration metric: took 4m0.000143963s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	E1105 19:16:17.840620   73496 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:16:17.840649   73496 pod_ready.go:39] duration metric: took 4m11.022533189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:17.840682   73496 kubeadm.go:597] duration metric: took 4m18.432062793s to restartPrimaryControlPlane
	W1105 19:16:17.840732   73496 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:16:17.840755   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
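The 73496 run above gives up after the full 4m0s because metrics-server-6867b74b74-5sp2j never reports Ready, which is what pushes it into the cluster reset that follows. A hedged client-go sketch of reading that Ready condition is shown here; the pod name and namespace come from the log, the kubeconfig path is a placeholder, and this is not minikube's pod_ready.go.

	// podready.go: report whether a pod's Ready condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-5sp2j", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", pod.Name, ready)
	}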
	I1105 19:16:21.064069   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:16:21.064607   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:21.064798   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:26.065202   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:26.065410   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:36.065932   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:36.066151   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
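
The kubelet-check probe above can be reproduced by hand on the node; this is only the manual equivalent of what kubeadm is already doing:
    curl -sSL http://localhost:10248/healthz    # "connection refused" here means the kubelet is not listening
    systemctl status kubelet --no-pager
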
	I1105 19:16:43.960239   73496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.119460606s)
	I1105 19:16:43.960324   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:43.986199   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:16:43.999287   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:16:44.013653   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:16:44.013675   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:16:44.013718   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:16:44.026073   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:16:44.026140   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:16:44.038723   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:16:44.050880   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:16:44.050957   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:16:44.061696   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.071739   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:16:44.072301   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.084030   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:16:44.093217   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:16:44.093275   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
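
A hedged sketch of the stale-config cleanup performed in the lines above; the file names and endpoint are taken from the log, but the loop itself is illustrative rather than minikube's actual code:
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"    # exit status 2 above means the file was already absent
    done
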
	I1105 19:16:44.102494   73496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:16:44.267623   73496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:16:52.534375   73496 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:16:52.534458   73496 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:16:52.534569   73496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:16:52.534704   73496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:16:52.534834   73496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:16:52.534930   73496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:16:52.536666   73496 out.go:235]   - Generating certificates and keys ...
	I1105 19:16:52.536759   73496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:16:52.536836   73496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:16:52.536911   73496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:16:52.536963   73496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:16:52.537060   73496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:16:52.537145   73496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:16:52.537232   73496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:16:52.537286   73496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:16:52.537361   73496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:16:52.537455   73496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:16:52.537500   73496 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:16:52.537578   73496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:16:52.537648   73496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:16:52.537725   73496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:16:52.537797   73496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:16:52.537905   73496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:16:52.537988   73496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:16:52.538075   73496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:16:52.538136   73496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:16:52.539588   73496 out.go:235]   - Booting up control plane ...
	I1105 19:16:52.539669   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:16:52.539743   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:16:52.539800   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:16:52.539885   73496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:16:52.539987   73496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:16:52.540057   73496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:16:52.540206   73496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:16:52.540300   73496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:16:52.540367   73496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733469ms
	I1105 19:16:52.540447   73496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:16:52.540528   73496 kubeadm.go:310] [api-check] The API server is healthy after 5.001962829s
	I1105 19:16:52.540651   73496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:16:52.540806   73496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:16:52.540899   73496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:16:52.541094   73496 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-459223 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:16:52.541164   73496 kubeadm.go:310] [bootstrap-token] Using token: f0bzzt.jihwqjda853aoxrb
	I1105 19:16:52.543528   73496 out.go:235]   - Configuring RBAC rules ...
	I1105 19:16:52.543658   73496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:16:52.543777   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:16:52.543942   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:16:52.544072   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:16:52.544222   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:16:52.544327   73496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:16:52.544453   73496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:16:52.544493   73496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:16:52.544536   73496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:16:52.544542   73496 kubeadm.go:310] 
	I1105 19:16:52.544593   73496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:16:52.544599   73496 kubeadm.go:310] 
	I1105 19:16:52.544687   73496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:16:52.544701   73496 kubeadm.go:310] 
	I1105 19:16:52.544739   73496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:16:52.544795   73496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:16:52.544855   73496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:16:52.544881   73496 kubeadm.go:310] 
	I1105 19:16:52.544958   73496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:16:52.544971   73496 kubeadm.go:310] 
	I1105 19:16:52.545039   73496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:16:52.545049   73496 kubeadm.go:310] 
	I1105 19:16:52.545111   73496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:16:52.545193   73496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:16:52.545251   73496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:16:52.545257   73496 kubeadm.go:310] 
	I1105 19:16:52.545324   73496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:16:52.545403   73496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:16:52.545409   73496 kubeadm.go:310] 
	I1105 19:16:52.545480   73496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.545605   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:16:52.545638   73496 kubeadm.go:310] 	--control-plane 
	I1105 19:16:52.545648   73496 kubeadm.go:310] 
	I1105 19:16:52.545779   73496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:16:52.545794   73496 kubeadm.go:310] 
	I1105 19:16:52.545903   73496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.546059   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
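
Illustrative only: the --discovery-token-ca-cert-hash value printed above can be recomputed from the cluster CA. The certificate path follows the certificateDir reported earlier in the log; the pipeline is the standard openssl recipe and is not executed by the test:
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
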
	I1105 19:16:52.546074   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:16:52.546083   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:16:52.548357   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:16:52.549732   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:16:52.560406   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
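
The 496-byte conflist copied above is not shown in the log. As a rough, assumed example only, a minimal bridge conflist of this kind at /etc/cni/net.d/1-k8s.conflist typically looks like the following; the subnet and plugin options are guesses, not the file minikube actually wrote:
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
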
	I1105 19:16:52.577268   73496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:16:52.577334   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:52.577373   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-459223 minikube.k8s.io/updated_at=2024_11_05T19_16_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=no-preload-459223 minikube.k8s.io/primary=true
	I1105 19:16:52.776299   73496 ops.go:34] apiserver oom_adj: -16
	I1105 19:16:52.776456   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.276618   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.777474   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.276726   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.777004   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.276725   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.777410   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.893941   73496 kubeadm.go:1113] duration metric: took 3.316665512s to wait for elevateKubeSystemPrivileges
	I1105 19:16:55.893984   73496 kubeadm.go:394] duration metric: took 4m56.532038314s to StartCluster
	I1105 19:16:55.894007   73496 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.894104   73496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:16:55.896620   73496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.896934   73496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:16:55.897120   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:16:55.897056   73496 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
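
Equivalent CLI operations for the addon set being enabled here, shown only for illustration and not run by the test harness itself:
    minikube addons enable metrics-server -p no-preload-459223
    minikube addons list -p no-preload-459223    # storage-provisioner and default-storageclass should also show as enabled
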
	I1105 19:16:55.897166   73496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-459223"
	I1105 19:16:55.897176   73496 addons.go:69] Setting default-storageclass=true in profile "no-preload-459223"
	I1105 19:16:55.897186   73496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-459223"
	I1105 19:16:55.897193   73496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-459223"
	I1105 19:16:55.897211   73496 addons.go:69] Setting metrics-server=true in profile "no-preload-459223"
	I1105 19:16:55.897231   73496 addons.go:234] Setting addon metrics-server=true in "no-preload-459223"
	W1105 19:16:55.897243   73496 addons.go:243] addon metrics-server should already be in state true
	I1105 19:16:55.897271   73496 host.go:66] Checking if "no-preload-459223" exists ...
	W1105 19:16:55.897195   73496 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:16:55.897323   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.897599   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897642   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897705   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897754   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897711   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897811   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.898341   73496 out.go:177] * Verifying Kubernetes components...
	I1105 19:16:55.899778   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:16:55.914218   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1105 19:16:55.914305   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1105 19:16:55.914726   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.914837   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.915283   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915305   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915391   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915418   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915642   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915757   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915804   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.916323   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.916367   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.916858   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1105 19:16:55.917296   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.917805   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.917832   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.918156   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.918678   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.918720   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.919527   73496 addons.go:234] Setting addon default-storageclass=true in "no-preload-459223"
	W1105 19:16:55.919549   73496 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:16:55.919576   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.919954   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.919996   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.932547   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I1105 19:16:55.933026   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.933588   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.933601   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.933918   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.934153   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.936094   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.937415   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I1105 19:16:55.937800   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.937812   73496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:16:55.938312   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.938324   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.938420   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I1105 19:16:55.938661   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.938816   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.938867   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:16:55.938894   73496 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:16:55.938918   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.939014   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.939350   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.939362   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.939855   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.940281   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.940310   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.940959   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.942661   73496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:16:55.942797   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943216   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.943392   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943422   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.943588   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.943842   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.944078   73496 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:55.944083   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.944096   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:16:55.944114   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.947574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.947767   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.947789   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.948125   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.948249   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.948343   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.948424   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.987691   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I1105 19:16:55.988131   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.988714   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.988739   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.989127   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.989325   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.991207   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.991453   73496 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:55.991472   73496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:16:55.991492   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.994362   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994800   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.994846   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994938   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.995145   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.995315   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.996088   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:56.109142   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:16:56.126382   73496 node_ready.go:35] waiting up to 6m0s for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138050   73496 node_ready.go:49] node "no-preload-459223" has status "Ready":"True"
	I1105 19:16:56.138076   73496 node_ready.go:38] duration metric: took 11.661265ms for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138087   73496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:56.143325   73496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
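
The readiness waits above can be approximated directly with kubectl (illustrative; the component=etcd label comes from the label list logged earlier):
    kubectl wait --for=condition=Ready node/no-preload-459223 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
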
	I1105 19:16:56.230205   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:16:56.230228   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:16:56.232603   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:56.259360   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:16:56.259388   73496 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:16:56.268694   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:56.321334   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:56.321364   73496 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:16:56.387409   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:57.010417   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010441   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010496   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010522   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010748   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.010795   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010804   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010812   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010818   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010817   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010830   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010838   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010843   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.011143   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011147   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011205   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011221   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.011209   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011298   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074127   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.074148   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.074476   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.074543   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074508   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.135875   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.135898   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136259   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136280   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136278   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136291   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.136308   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136703   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136747   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136757   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136767   73496 addons.go:475] Verifying addon metrics-server=true in "no-preload-459223"
	I1105 19:16:57.138699   73496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:16:56.066834   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:56.067140   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:57.140755   73496 addons.go:510] duration metric: took 1.243699533s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
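
Illustrative follow-up checks for the addons just enabled (not part of the log); kubectl top only answers once the metrics-server deployment is actually serving the metrics API:
    kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl top nodes
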
	I1105 19:16:58.154376   73496 pod_ready.go:103] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:17:00.149838   73496 pod_ready.go:93] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:00.149864   73496 pod_ready.go:82] duration metric: took 4.006514005s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:00.149876   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156460   73496 pod_ready.go:93] pod "kube-apiserver-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.156486   73496 pod_ready.go:82] duration metric: took 1.006602068s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156499   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160598   73496 pod_ready.go:93] pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.160618   73496 pod_ready.go:82] duration metric: took 4.110322ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160631   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164461   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.164482   73496 pod_ready.go:82] duration metric: took 3.842329ms for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164492   73496 pod_ready.go:39] duration metric: took 5.026393011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:17:01.164509   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:17:01.164566   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:17:01.183307   73496 api_server.go:72] duration metric: took 5.286331754s to wait for apiserver process to appear ...
	I1105 19:17:01.183338   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:17:01.183357   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:17:01.189083   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:17:01.190439   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:17:01.190469   73496 api_server.go:131] duration metric: took 7.123058ms to wait for apiserver health ...
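
The same health probe can be issued by hand against the endpoint logged above (illustrative; -k skips CA verification for brevity):
    curl -k https://192.168.72.101:8443/healthz    # expected body: ok
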
	I1105 19:17:01.190479   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:17:01.198820   73496 system_pods.go:59] 9 kube-system pods found
	I1105 19:17:01.198854   73496 system_pods.go:61] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198862   73496 system_pods.go:61] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198869   73496 system_pods.go:61] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.198873   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.198879   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.198883   73496 system_pods.go:61] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.198887   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.198893   73496 system_pods.go:61] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.198896   73496 system_pods.go:61] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.198903   73496 system_pods.go:74] duration metric: took 8.418414ms to wait for pod list to return data ...
	I1105 19:17:01.198913   73496 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:17:01.202229   73496 default_sa.go:45] found service account: "default"
	I1105 19:17:01.202251   73496 default_sa.go:55] duration metric: took 3.332652ms for default service account to be created ...
	I1105 19:17:01.202260   73496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:17:01.208774   73496 system_pods.go:86] 9 kube-system pods found
	I1105 19:17:01.208803   73496 system_pods.go:89] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208811   73496 system_pods.go:89] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208817   73496 system_pods.go:89] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.208821   73496 system_pods.go:89] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.208825   73496 system_pods.go:89] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.208828   73496 system_pods.go:89] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.208833   73496 system_pods.go:89] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.208838   73496 system_pods.go:89] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.208842   73496 system_pods.go:89] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.208848   73496 system_pods.go:126] duration metric: took 6.584071ms to wait for k8s-apps to be running ...
	I1105 19:17:01.208856   73496 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:17:01.208898   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:01.225005   73496 system_svc.go:56] duration metric: took 16.138051ms WaitForService to wait for kubelet
	I1105 19:17:01.225038   73496 kubeadm.go:582] duration metric: took 5.328067688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:17:01.225062   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:17:01.347771   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:17:01.347799   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:17:01.347813   73496 node_conditions.go:105] duration metric: took 122.746343ms to run NodePressure ...
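
An illustrative way to view the same capacity figures the NodePressure check reads from the node object:
    kubectl get node no-preload-459223 -o jsonpath='{.status.capacity}'
    # e.g. {"cpu":"2","ephemeral-storage":"17734596Ki",...}
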
	I1105 19:17:01.347826   73496 start.go:241] waiting for startup goroutines ...
	I1105 19:17:01.347834   73496 start.go:246] waiting for cluster config update ...
	I1105 19:17:01.347846   73496 start.go:255] writing updated cluster config ...
	I1105 19:17:01.348126   73496 ssh_runner.go:195] Run: rm -f paused
	I1105 19:17:01.396396   73496 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:17:01.398528   73496 out.go:177] * Done! kubectl is now configured to use "no-preload-459223" cluster and "default" namespace by default
	I1105 19:17:36.069129   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:17:36.069396   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:17:36.069426   74485 kubeadm.go:310] 
	I1105 19:17:36.069489   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:17:36.069572   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:17:36.069591   74485 kubeadm.go:310] 
	I1105 19:17:36.069638   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:17:36.069699   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:17:36.069843   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:17:36.069852   74485 kubeadm.go:310] 
	I1105 19:17:36.069967   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:17:36.070017   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:17:36.070067   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:17:36.070074   74485 kubeadm.go:310] 
	I1105 19:17:36.070216   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:17:36.070328   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:17:36.070345   74485 kubeadm.go:310] 
	I1105 19:17:36.070486   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:17:36.070622   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:17:36.070690   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:17:36.070758   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:17:36.070767   74485 kubeadm.go:310] 
	I1105 19:17:36.071471   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:17:36.071558   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:17:36.071652   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1105 19:17:36.071791   74485 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
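
The troubleshooting commands suggested in the output above, collected into one illustrative sequence for the cri-o runtime used here:
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # substitute the failing container's ID
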
	
	I1105 19:17:36.071838   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:17:36.527864   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:36.543211   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:17:36.552656   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:17:36.552676   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:17:36.552734   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:17:36.562296   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:17:36.562360   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:17:36.571759   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:17:36.580534   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:17:36.580586   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:17:36.590320   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.599165   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:17:36.599235   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.608340   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:17:36.616935   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:17:36.616986   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:17:36.625948   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:17:36.843267   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:19:32.770686   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:19:32.770828   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 19:19:32.772504   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:19:32.772564   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:19:32.772656   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:19:32.772784   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:19:32.772893   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:19:32.772971   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:19:32.774648   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:19:32.774726   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:19:32.774804   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:19:32.774902   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:19:32.775012   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:19:32.775144   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:19:32.775223   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:19:32.775307   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:19:32.775397   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:19:32.775487   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:19:32.775597   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:19:32.775651   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:19:32.775728   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:19:32.775796   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:19:32.775864   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:19:32.775961   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:19:32.776041   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:19:32.776175   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:19:32.776281   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:19:32.776330   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:19:32.776417   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:19:32.777837   74485 out.go:235]   - Booting up control plane ...
	I1105 19:19:32.777940   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:19:32.778032   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:19:32.778134   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:19:32.778248   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:19:32.778489   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:19:32.778563   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:19:32.778652   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.778960   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779080   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779302   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779399   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779663   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779766   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779990   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780051   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.780241   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780260   74485 kubeadm.go:310] 
	I1105 19:19:32.780325   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:19:32.780381   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:19:32.780391   74485 kubeadm.go:310] 
	I1105 19:19:32.780438   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:19:32.780486   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:19:32.780627   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:19:32.780639   74485 kubeadm.go:310] 
	I1105 19:19:32.780748   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:19:32.780790   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:19:32.780819   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:19:32.780825   74485 kubeadm.go:310] 
	I1105 19:19:32.780961   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:19:32.781048   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:19:32.781055   74485 kubeadm.go:310] 
	I1105 19:19:32.781144   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:19:32.781225   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:19:32.781293   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:19:32.781394   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:19:32.781475   74485 kubeadm.go:394] duration metric: took 8m1.792270232s to StartCluster
	I1105 19:19:32.781485   74485 kubeadm.go:310] 
	I1105 19:19:32.781522   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:19:32.781589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:19:32.825435   74485 cri.go:89] found id: ""
	I1105 19:19:32.825465   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.825475   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:19:32.825482   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:19:32.825543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:19:32.859245   74485 cri.go:89] found id: ""
	I1105 19:19:32.859275   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.859286   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:19:32.859293   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:19:32.859355   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:19:32.890801   74485 cri.go:89] found id: ""
	I1105 19:19:32.890833   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.890844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:19:32.890851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:19:32.890919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:19:32.925244   74485 cri.go:89] found id: ""
	I1105 19:19:32.925273   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.925280   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:19:32.925287   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:19:32.925352   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:19:32.959091   74485 cri.go:89] found id: ""
	I1105 19:19:32.959118   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.959129   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:19:32.959137   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:19:32.959191   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:19:32.990230   74485 cri.go:89] found id: ""
	I1105 19:19:32.990264   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.990276   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:19:32.990284   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:19:32.990343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:19:33.027461   74485 cri.go:89] found id: ""
	I1105 19:19:33.027494   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.027505   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:19:33.027512   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:19:33.027574   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:19:33.070819   74485 cri.go:89] found id: ""
	I1105 19:19:33.070847   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.070858   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:19:33.070869   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:19:33.070883   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:19:33.122580   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:19:33.122615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:19:33.136015   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:19:33.136043   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:19:33.213727   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:19:33.213750   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:19:33.213762   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:19:33.324287   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:19:33.324333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1105 19:19:33.384732   74485 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 19:19:33.384785   74485 out.go:270] * 
	W1105 19:19:33.384844   74485 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.384857   74485 out.go:270] * 
	W1105 19:19:33.385632   74485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:19:33.388860   74485 out.go:201] 
	W1105 19:19:33.390328   74485 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.390366   74485 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 19:19:33.390393   74485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 19:19:33.391785   74485 out.go:201] 
	
	
	==> CRI-O <==
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.212850186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834375212830451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13132ed5-a07b-451f-adbe-95359c8be0ba name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.213392186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11894ea3-645e-4a72-b143-9d7148affd52 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.213470533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11894ea3-645e-4a72-b143-9d7148affd52 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.213504131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=11894ea3-645e-4a72-b143-9d7148affd52 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.247843785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fef1e632-b5ea-4b3e-a02d-56fc76124d03 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.247933844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fef1e632-b5ea-4b3e-a02d-56fc76124d03 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.249066553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f7ece4a-294e-433e-9fe5-0f7cc1700d7f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.249497462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834375249477957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f7ece4a-294e-433e-9fe5-0f7cc1700d7f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.250017693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=784aa8ff-ab26-408e-9d04-d853bd45c7b1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.250072306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=784aa8ff-ab26-408e-9d04-d853bd45c7b1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.250103922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=784aa8ff-ab26-408e-9d04-d853bd45c7b1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.280099175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e61f6670-d1b7-40b7-a78e-a8fab8a38a84 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.280224349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e61f6670-d1b7-40b7-a78e-a8fab8a38a84 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.281305984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=379c31d3-51fc-43c4-93d0-4ba49d81e207 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.281654221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834375281632712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=379c31d3-51fc-43c4-93d0-4ba49d81e207 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.282105530Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22b3dc94-2966-40ff-8d0e-ea1e84207a04 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.282201781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22b3dc94-2966-40ff-8d0e-ea1e84207a04 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.282240253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=22b3dc94-2966-40ff-8d0e-ea1e84207a04 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.312994300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f4583b2-7916-4958-8e8a-b474f9c66470 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.313162314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f4583b2-7916-4958-8e8a-b474f9c66470 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.314481568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24900c28-49ca-4ec9-bd9e-bec220e46667 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.314845719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834375314827072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24900c28-49ca-4ec9-bd9e-bec220e46667 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.315352404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae37f4eb-bc1f-48e2-9927-31358007946f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.315413310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae37f4eb-bc1f-48e2-9927-31358007946f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:19:35 old-k8s-version-567666 crio[622]: time="2024-11-05 19:19:35.315448308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ae37f4eb-bc1f-48e2-9927-31358007946f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 5 19:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055631] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039673] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.010642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.961684] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543338] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.991220] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.059812] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.048972] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.214500] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.145320] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.257311] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +6.641170] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[  +0.060122] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.800603] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[ +13.119531] kauditd_printk_skb: 46 callbacks suppressed
	[Nov 5 19:15] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Nov 5 19:17] systemd-fstab-generator[5393]: Ignoring "noauto" option for root device
	[  +0.071837] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:19:35 up 8 min,  0 users,  load average: 0.01, 0.05, 0.03
	Linux old-k8s-version-567666 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]: created by net/http.(*Transport).queueForDial
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]: goroutine 159 [runnable]:
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]: net.cgoLookupIP(0x4f7fdc0, 0xc000876240, 0x48ab5d6, 0x3, 0xc000b88930, 0x1f, 0x10, 0x7fdf48274698, 0xc000b90bd0, 0x7fdf48274698, ...)
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]:         /usr/local/go/src/net/cgo_unix.go:229 +0x199
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]: net.(*Resolver).lookupIP(0x70c5740, 0x4f7fdc0, 0xc000876240, 0x48ab5d6, 0x3, 0xc000b88930, 0x1f, 0x0, 0x4a707e8, 0x0, ...)
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]:         /usr/local/go/src/net/lookup_unix.go:96 +0x187
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]: net.glob..func1(0x4f7fdc0, 0xc000876240, 0xc000b90e80, 0x48ab5d6, 0x3, 0xc000b88930, 0x1f, 0xc000052030, 0x0, 0xc000bbc240, ...)
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]:         /usr/local/go/src/net/hook.go:23 +0x72
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]: net.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]:         /usr/local/go/src/net/lookup.go:293 +0xb9
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000a8b2c0, 0xc000b88960, 0x23, 0xc000876280)
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]: created by internal/singleflight.(*Group).DoChan
	Nov 05 19:19:32 old-k8s-version-567666 kubelet[5576]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Nov 05 19:19:32 old-k8s-version-567666 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 05 19:19:32 old-k8s-version-567666 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 05 19:19:33 old-k8s-version-567666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Nov 05 19:19:33 old-k8s-version-567666 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 05 19:19:33 old-k8s-version-567666 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 05 19:19:33 old-k8s-version-567666 kubelet[5632]: I1105 19:19:33.393385    5632 server.go:416] Version: v1.20.0
	Nov 05 19:19:33 old-k8s-version-567666 kubelet[5632]: I1105 19:19:33.393687    5632 server.go:837] Client rotation is on, will bootstrap in background
	Nov 05 19:19:33 old-k8s-version-567666 kubelet[5632]: I1105 19:19:33.396714    5632 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 05 19:19:33 old-k8s-version-567666 kubelet[5632]: W1105 19:19:33.398536    5632 manager.go:159] Cannot detect current cgroup on cgroup v2
	Nov 05 19:19:33 old-k8s-version-567666 kubelet[5632]: I1105 19:19:33.398538    5632 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
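Note: the dump above shows the kubelet crash-looping on this node (systemd reports the service exiting with status 255 and a restart counter of 20) while kubeadm's wait-control-plane phase times out, so CRI-O never lists any control-plane containers. A minimal manual triage pass, using only the commands the log itself recommends (the profile name old-k8s-version-567666 and the CRI-O socket path are taken from this run), might look like:

	# Shell into the affected node for this profile.
	minikube ssh -p old-k8s-version-567666

	# Check whether the kubelet is running and why it last exited.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# List any Kubernetes containers CRI-O started, then read the failing one's logs.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute a real container ID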
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 2 (238.506273ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-567666" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (704.31s)
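Note: the kubelet excerpt above also logs "Cannot detect current cgroup on cgroup v2", and minikube's own suggestion (with related issue kubernetes/minikube#4172) is to retry with the kubelet cgroup driver pinned to systemd. A sketch of that retry, reusing the flags this run was started with (kvm2 driver, CRI-O runtime, Kubernetes v1.20.0) and not verified here to fix the failure, could be:

	# Remove the broken profile, then start it again with the suggested extra kubelet config.
	minikube delete -p old-k8s-version-567666
	minikube start -p old-k8s-version-567666 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd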

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1105 19:15:52.080589   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-11-05 19:24:40.003658252 +0000 UTC m=+6213.676382938
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
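Note: this test polls for up to 9m0s for a pod carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace after the stop/start cycle. To reproduce the check by hand against the same cluster, something along these lines should work (assuming the kubeconfig context is named after the profile, default-k8s-diff-port-608095, as minikube normally arranges):

	# List the dashboard pods the test is waiting for, using its label selector.
	kubectl --context default-k8s-diff-port-608095 \
	  get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide

	# If a pod exists but never becomes Ready, inspect its events and conditions.
	kubectl --context default-k8s-diff-port-608095 \
	  describe pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard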
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-608095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-608095 logs -n 25: (1.912023169s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-929548 sudo cat                              | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo find                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo crio                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-929548                                       | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-537175 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | disable-driver-mounts-537175                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:04 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-459223             | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-271881            | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:07:52.649090   74485 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:07:52.649200   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649205   74485 out.go:358] Setting ErrFile to fd 2...
	I1105 19:07:52.649210   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649374   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:07:52.649909   74485 out.go:352] Setting JSON to false
	I1105 19:07:52.650785   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6615,"bootTime":1730827058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:07:52.650878   74485 start.go:139] virtualization: kvm guest
	I1105 19:07:52.652866   74485 out.go:177] * [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:07:52.654107   74485 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:07:52.654107   74485 notify.go:220] Checking for updates...
	I1105 19:07:52.655282   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:07:52.656379   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:07:52.657451   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:07:52.658694   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:07:52.659835   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:07:52.661251   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:07:52.661622   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.661660   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.677005   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I1105 19:07:52.677521   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.678096   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.678118   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.678489   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.678735   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.680466   74485 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1105 19:07:52.681734   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:07:52.682087   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.682139   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.697071   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1105 19:07:52.697503   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.697958   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.697980   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.698259   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.698439   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.732962   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:07:52.734079   74485 start.go:297] selected driver: kvm2
	I1105 19:07:52.734094   74485 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.734209   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:07:52.734912   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.735038   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:07:52.750214   74485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:07:52.750609   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:07:52.750641   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:07:52.750697   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:07:52.750745   74485 start.go:340] cluster config:
	{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.750877   74485 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.753288   74485 out.go:177] * Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	I1105 19:07:50.739209   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:53.811246   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:52.754354   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:07:52.754407   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 19:07:52.754425   74485 cache.go:56] Caching tarball of preloaded images
	I1105 19:07:52.754503   74485 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:07:52.754515   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 19:07:52.754610   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:07:52.754817   74485 start.go:360] acquireMachinesLock for old-k8s-version-567666: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:07:59.891257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:02.963247   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:09.043263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:12.115289   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:18.195275   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:21.267213   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:27.347251   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:30.419240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:36.499291   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:39.571255   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:45.651258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:48.723262   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:54.803265   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:57.875236   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:03.955241   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:07.027229   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:13.107258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:16.179257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:22.259227   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:25.331263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:31.411234   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:34.483240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:40.563258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:43.635253   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:49.715287   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:52.787276   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:58.867242   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:01.939296   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:08.019268   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:11.091350   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:17.171266   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:20.243245   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:23.247511   73732 start.go:364] duration metric: took 4m30.277290481s to acquireMachinesLock for "embed-certs-271881"
	I1105 19:10:23.247565   73732 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:23.247590   73732 fix.go:54] fixHost starting: 
	I1105 19:10:23.248173   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:23.248235   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:23.263573   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I1105 19:10:23.264016   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:23.264437   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:10:23.264461   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:23.264888   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:23.265122   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:23.265311   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:10:23.267000   73732 fix.go:112] recreateIfNeeded on embed-certs-271881: state=Stopped err=<nil>
	I1105 19:10:23.267031   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	W1105 19:10:23.267183   73732 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:23.269188   73732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-271881" ...
	I1105 19:10:23.244961   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:23.245021   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245327   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:10:23.245352   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245536   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:10:23.247352   73496 machine.go:96] duration metric: took 4m37.425023044s to provisionDockerMachine
	I1105 19:10:23.247393   73496 fix.go:56] duration metric: took 4m37.446801616s for fixHost
	I1105 19:10:23.247400   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 4m37.446835698s
	W1105 19:10:23.247424   73496 start.go:714] error starting host: provision: host is not running
	W1105 19:10:23.247522   73496 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1105 19:10:23.247534   73496 start.go:729] Will try again in 5 seconds ...
	I1105 19:10:23.270443   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Start
	I1105 19:10:23.270681   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring networks are active...
	I1105 19:10:23.271552   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network default is active
	I1105 19:10:23.271924   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network mk-embed-certs-271881 is active
	I1105 19:10:23.272243   73732 main.go:141] libmachine: (embed-certs-271881) Getting domain xml...
	I1105 19:10:23.273027   73732 main.go:141] libmachine: (embed-certs-271881) Creating domain...
	I1105 19:10:24.503219   73732 main.go:141] libmachine: (embed-certs-271881) Waiting to get IP...
	I1105 19:10:24.504067   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.504444   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.504503   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.504415   75020 retry.go:31] will retry after 194.539819ms: waiting for machine to come up
	I1105 19:10:24.701086   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.701552   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.701579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.701501   75020 retry.go:31] will retry after 361.371677ms: waiting for machine to come up
	I1105 19:10:25.064078   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.064484   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.064512   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.064433   75020 retry.go:31] will retry after 442.206433ms: waiting for machine to come up
	I1105 19:10:25.507981   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.508380   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.508405   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.508338   75020 retry.go:31] will retry after 573.453662ms: waiting for machine to come up
	I1105 19:10:26.083299   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.083727   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.083753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.083670   75020 retry.go:31] will retry after 686.210957ms: waiting for machine to come up
	I1105 19:10:26.771637   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.772070   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.772112   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.772062   75020 retry.go:31] will retry after 685.825223ms: waiting for machine to come up
	I1105 19:10:27.459230   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:27.459652   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:27.459677   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:27.459616   75020 retry.go:31] will retry after 1.167971852s: waiting for machine to come up
	I1105 19:10:28.247729   73496 start.go:360] acquireMachinesLock for no-preload-459223: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:10:28.629194   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:28.629526   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:28.629549   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:28.629488   75020 retry.go:31] will retry after 1.180980288s: waiting for machine to come up
	I1105 19:10:29.812048   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:29.812445   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:29.812475   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:29.812390   75020 retry.go:31] will retry after 1.527253183s: waiting for machine to come up
	I1105 19:10:31.342147   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:31.342519   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:31.342546   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:31.342467   75020 retry.go:31] will retry after 1.597485878s: waiting for machine to come up
	I1105 19:10:32.942141   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:32.942459   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:32.942505   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:32.942431   75020 retry.go:31] will retry after 2.416793509s: waiting for machine to come up
	I1105 19:10:35.360354   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:35.360717   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:35.360743   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:35.360674   75020 retry.go:31] will retry after 3.193637492s: waiting for machine to come up
	I1105 19:10:38.556294   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:38.556744   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:38.556775   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:38.556673   75020 retry.go:31] will retry after 3.819760443s: waiting for machine to come up
	I1105 19:10:42.380607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381140   73732 main.go:141] libmachine: (embed-certs-271881) Found IP for machine: 192.168.39.58
	I1105 19:10:42.381172   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has current primary IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381196   73732 main.go:141] libmachine: (embed-certs-271881) Reserving static IP address...
	I1105 19:10:42.381607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.381634   73732 main.go:141] libmachine: (embed-certs-271881) Reserved static IP address: 192.168.39.58
	I1105 19:10:42.381647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | skip adding static IP to network mk-embed-certs-271881 - found existing host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"}
	I1105 19:10:42.381671   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Getting to WaitForSSH function...
	I1105 19:10:42.381686   73732 main.go:141] libmachine: (embed-certs-271881) Waiting for SSH to be available...
	I1105 19:10:42.383908   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384306   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.384333   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384427   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH client type: external
	I1105 19:10:42.384458   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa (-rw-------)
	I1105 19:10:42.384486   73732 main.go:141] libmachine: (embed-certs-271881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:10:42.384502   73732 main.go:141] libmachine: (embed-certs-271881) DBG | About to run SSH command:
	I1105 19:10:42.384510   73732 main.go:141] libmachine: (embed-certs-271881) DBG | exit 0
	I1105 19:10:42.506807   73732 main.go:141] libmachine: (embed-certs-271881) DBG | SSH cmd err, output: <nil>: 
	I1105 19:10:42.507217   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetConfigRaw
	I1105 19:10:42.507868   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.510314   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.510680   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510936   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/config.json ...
	I1105 19:10:42.511183   73732 machine.go:93] provisionDockerMachine start ...
	I1105 19:10:42.511203   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:42.511426   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.513759   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514111   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.514144   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514290   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.514473   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514654   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514827   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.514979   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.515191   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.515202   73732 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:10:42.619241   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:10:42.619273   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619517   73732 buildroot.go:166] provisioning hostname "embed-certs-271881"
	I1105 19:10:42.619555   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619735   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.622695   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623117   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.623146   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623304   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.623465   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623632   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623825   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.623957   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.624122   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.624135   73732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-271881 && echo "embed-certs-271881" | sudo tee /etc/hostname
	I1105 19:10:42.740722   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-271881
	
	I1105 19:10:42.740749   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.743579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.743922   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.743945   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.744160   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.744343   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744470   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.744756   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.744950   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.744972   73732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-271881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-271881/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-271881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:10:42.854869   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:42.854898   73732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:10:42.854926   73732 buildroot.go:174] setting up certificates
	I1105 19:10:42.854940   73732 provision.go:84] configureAuth start
	I1105 19:10:42.854948   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.855222   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.857913   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858228   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.858252   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858440   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.860753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861041   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.861062   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861222   73732 provision.go:143] copyHostCerts
	I1105 19:10:42.861274   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:10:42.861291   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:10:42.861385   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:10:42.861543   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:10:42.861556   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:10:42.861595   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:10:42.861671   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:10:42.861681   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:10:42.861713   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:10:42.861781   73732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.embed-certs-271881 san=[127.0.0.1 192.168.39.58 embed-certs-271881 localhost minikube]
	I1105 19:10:43.659372   74141 start.go:364] duration metric: took 3m39.006624915s to acquireMachinesLock for "default-k8s-diff-port-608095"
	I1105 19:10:43.659450   74141 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:43.659458   74141 fix.go:54] fixHost starting: 
	I1105 19:10:43.659814   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:43.659874   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:43.677604   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I1105 19:10:43.678132   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:43.678624   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:10:43.678649   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:43.679047   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:43.679250   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:10:43.679407   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:10:43.681036   74141 fix.go:112] recreateIfNeeded on default-k8s-diff-port-608095: state=Stopped err=<nil>
	I1105 19:10:43.681063   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	W1105 19:10:43.681194   74141 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:43.683110   74141 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-608095" ...
	I1105 19:10:43.684451   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Start
	I1105 19:10:43.684639   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring networks are active...
	I1105 19:10:43.685436   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network default is active
	I1105 19:10:43.685983   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network mk-default-k8s-diff-port-608095 is active
	I1105 19:10:43.686398   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Getting domain xml...
	I1105 19:10:43.687143   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Creating domain...
	I1105 19:10:43.044648   73732 provision.go:177] copyRemoteCerts
	I1105 19:10:43.044703   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:10:43.044730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.047120   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047506   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.047538   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047717   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.047886   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.048037   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.048186   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.129098   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:10:43.154632   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1105 19:10:43.179681   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 19:10:43.205598   73732 provision.go:87] duration metric: took 350.648117ms to configureAuth
	I1105 19:10:43.205622   73732 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:10:43.205822   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:10:43.205900   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.208446   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.208763   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.208799   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.209006   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.209190   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209489   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.209611   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.209828   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.209850   73732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:10:43.431540   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:10:43.431569   73732 machine.go:96] duration metric: took 920.370689ms to provisionDockerMachine
	I1105 19:10:43.431582   73732 start.go:293] postStartSetup for "embed-certs-271881" (driver="kvm2")
	I1105 19:10:43.431595   73732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:10:43.431617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.431912   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:10:43.431940   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.434821   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435170   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.435193   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435338   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.435532   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.435714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.435851   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.517391   73732 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:10:43.521532   73732 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:10:43.521553   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:10:43.521632   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:10:43.521721   73732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:10:43.521839   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:10:43.531045   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:43.556596   73732 start.go:296] duration metric: took 125.000692ms for postStartSetup
	I1105 19:10:43.556634   73732 fix.go:56] duration metric: took 20.309059136s for fixHost
	I1105 19:10:43.556663   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.558888   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559181   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.559220   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.559531   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559674   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.559934   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.560096   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.560106   73732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:10:43.659219   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833843.637801657
	
	I1105 19:10:43.659240   73732 fix.go:216] guest clock: 1730833843.637801657
	I1105 19:10:43.659247   73732 fix.go:229] Guest: 2024-11-05 19:10:43.637801657 +0000 UTC Remote: 2024-11-05 19:10:43.556637855 +0000 UTC m=+290.729857868 (delta=81.163802ms)
	I1105 19:10:43.659284   73732 fix.go:200] guest clock delta is within tolerance: 81.163802ms
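The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it against the host's clock, and accept the drift because it is within tolerance. A minimal Go sketch of that comparison, assuming a hypothetical `parseGuestClock` helper and a 2-second tolerance (the log only says the 81ms delta was "within tolerance", it does not state the limit):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) into a
// time.Time. Illustrative helper only; it assumes a full 9-digit fraction.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Output captured over SSH in the log: 1730833843.637801657
	guest, err := parseGuestClock("1730833843.637801657")
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := guest.Sub(remote)

	// Assumed tolerance for this sketch; not minikube's actual value.
	const tolerance = 2 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```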
	I1105 19:10:43.659290   73732 start.go:83] releasing machines lock for "embed-certs-271881", held for 20.411743975s
	I1105 19:10:43.659324   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.659589   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:43.662581   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663025   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.663058   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663214   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663907   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.664017   73732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:10:43.664057   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.664108   73732 ssh_runner.go:195] Run: cat /version.json
	I1105 19:10:43.664131   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.666998   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667059   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667365   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667395   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667424   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667438   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667543   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667638   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667897   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667968   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667996   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.668078   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.775067   73732 ssh_runner.go:195] Run: systemctl --version
	I1105 19:10:43.780892   73732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:10:43.919564   73732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:10:43.926362   73732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:10:43.926422   73732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:10:43.942359   73732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:10:43.942378   73732 start.go:495] detecting cgroup driver to use...
	I1105 19:10:43.942450   73732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:10:43.964650   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:10:43.980651   73732 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:10:43.980717   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:10:43.993988   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:10:44.007440   73732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:10:44.132040   73732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:10:44.314220   73732 docker.go:233] disabling docker service ...
	I1105 19:10:44.314294   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:10:44.337362   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:10:44.351277   73732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:10:44.485105   73732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:10:44.621596   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:10:44.636254   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:10:44.656530   73732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:10:44.656595   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.667156   73732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:10:44.667237   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.682233   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.692814   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.704688   73732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:10:44.721662   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.738629   73732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.754944   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.765089   73732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:10:44.774147   73732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:10:44.774210   73732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:10:44.786312   73732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
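When `sudo sysctl net.bridge.bridge-nf-call-iptables` fails with "No such file or directory", the bridge netfilter module simply is not loaded yet, so the run above falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A hedged Go sketch of that fallback sequence; the `run` helper is hypothetical and stands in for minikube's ssh_runner, which executes the same commands on the guest:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command with sudo and returns its combined output.
// Hypothetical local helper; in the log these commands go through SSH.
func run(cmd string) (string, error) {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Reading the sysctl fails on a fresh guest until br_netfilter is loaded.
	if _, err := run("sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("netfilter sysctl not available, loading br_netfilter:", err)
		if _, err := run("modprobe br_netfilter"); err != nil {
			fmt.Println("modprobe failed (may be acceptable on some kernels):", err)
		}
	}
	// Enable IPv4 forwarding, mirroring the command in the log.
	if _, err := run("echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
```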
	I1105 19:10:44.795892   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:44.926823   73732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:10:45.022945   73732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:10:45.023042   73732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:10:45.027389   73732 start.go:563] Will wait 60s for crictl version
	I1105 19:10:45.027451   73732 ssh_runner.go:195] Run: which crictl
	I1105 19:10:45.030701   73732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:10:45.067294   73732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:10:45.067410   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.094394   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.123459   73732 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:10:45.124645   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:45.127396   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.127794   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:45.127833   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.128104   73732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 19:10:45.131923   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
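The bash one-liner above is an idempotent hosts-file update: strip any existing line for host.minikube.internal, append the fresh "IP<TAB>hostname" entry, and copy the result back over /etc/hosts. A small Go sketch of the same pattern, written against a temp file so it stays a self-contained illustration rather than minikube's actual code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for hostname, appends the desired
// "IP<TAB>hostname" entry, and writes the file back. Illustrative only.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+hostname) {
			continue // stale entry, drop it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Example values from the log; a temp copy is used instead of /etc/hosts.
	path := "/tmp/hosts.example"
	_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"), 0644)
	if err := ensureHostsEntry(path, "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("could not update hosts file:", err)
		return
	}
	out, _ := os.ReadFile(path)
	fmt.Print(string(out))
}
```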
	I1105 19:10:45.143951   73732 kubeadm.go:883] updating cluster {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:10:45.144078   73732 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:10:45.144125   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:45.177770   73732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:10:45.177830   73732 ssh_runner.go:195] Run: which lz4
	I1105 19:10:45.181571   73732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:10:45.186569   73732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:10:45.186602   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:10:46.442865   73732 crio.go:462] duration metric: took 1.26132812s to copy over tarball
	I1105 19:10:46.442959   73732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:10:44.962206   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting to get IP...
	I1105 19:10:44.963032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963397   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963492   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:44.963380   75165 retry.go:31] will retry after 274.297859ms: waiting for machine to come up
	I1105 19:10:45.239024   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239453   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.239406   75165 retry.go:31] will retry after 239.892312ms: waiting for machine to come up
	I1105 19:10:45.481036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481584   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.481569   75165 retry.go:31] will retry after 360.538082ms: waiting for machine to come up
	I1105 19:10:45.844144   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844565   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844596   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.844533   75165 retry.go:31] will retry after 387.597088ms: waiting for machine to come up
	I1105 19:10:46.234241   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234798   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.234738   75165 retry.go:31] will retry after 597.596298ms: waiting for machine to come up
	I1105 19:10:46.833721   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834170   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.834142   75165 retry.go:31] will retry after 688.240413ms: waiting for machine to come up
	I1105 19:10:47.523898   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524412   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524442   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:47.524377   75165 retry.go:31] will retry after 826.38207ms: waiting for machine to come up
	I1105 19:10:48.352258   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352787   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352809   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:48.352681   75165 retry.go:31] will retry after 1.381579847s: waiting for machine to come up
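The interleaved retry.go:31 lines from the default-k8s-diff-port-608095 profile show the "waiting for machine to come up" poll: each attempt to read the domain's DHCP lease fails until the guest acquires an IP, and the wait between attempts grows with jitter (274ms, 239ms, 360ms, ..., 1.38s). A minimal sketch of that kind of jittered, growing backoff in Go; the function names, growth factor, and placeholder IP are assumptions for illustration, not minikube's retry package:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it succeeds or attempts are exhausted, sleeping
// a jittered, growing interval between tries, similar to the
// "will retry after Xms: waiting for machine to come up" lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	base := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		// Add up to 50% jitter, then grow the base interval for the next round.
		sleep := base + time.Duration(rand.Int63n(int64(base/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		base = base * 3 / 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.50.10", nil // placeholder IP for the example
	}, 10)
	fmt.Println(ip, err)
}
```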
	I1105 19:10:48.547186   73732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104175993s)
	I1105 19:10:48.547221   73732 crio.go:469] duration metric: took 2.104326973s to extract the tarball
	I1105 19:10:48.547231   73732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:10:48.583027   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:48.630180   73732 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:10:48.630208   73732 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:10:48.630218   73732 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.31.2 crio true true} ...
	I1105 19:10:48.630349   73732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-271881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:10:48.630412   73732 ssh_runner.go:195] Run: crio config
	I1105 19:10:48.682182   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:48.682204   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:48.682213   73732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:10:48.682232   73732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-271881 NodeName:embed-certs-271881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:10:48.682354   73732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-271881"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:10:48.682412   73732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:10:48.691968   73732 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:10:48.692031   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:10:48.700980   73732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:10:48.716797   73732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:10:48.732408   73732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
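The kubeadm.yaml dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the kubeadm options logged at kubeadm.go:189 and shipped to /var/tmp/minikube/kubeadm.yaml.new as shown here. A small, hedged sketch of rendering such a fragment with Go's text/template; the template text and struct fields are illustrative stand-ins, not minikube's actual templates:

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the values that appear in the generated config.
type kubeadmValues struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

// A trimmed-down template covering only a fragment of the file above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmValues{
		AdvertiseAddress:  "192.168.39.58",
		BindPort:          8443,
		NodeName:          "embed-certs-271881",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.2",
	})
}
```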
	I1105 19:10:48.748354   73732 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1105 19:10:48.751791   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:48.763068   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:48.893747   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:10:48.910247   73732 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881 for IP: 192.168.39.58
	I1105 19:10:48.910270   73732 certs.go:194] generating shared ca certs ...
	I1105 19:10:48.910303   73732 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:10:48.910488   73732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:10:48.910547   73732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:10:48.910561   73732 certs.go:256] generating profile certs ...
	I1105 19:10:48.910673   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/client.key
	I1105 19:10:48.910768   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key.0a454894
	I1105 19:10:48.910837   73732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key
	I1105 19:10:48.911021   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:10:48.911059   73732 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:10:48.911071   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:10:48.911116   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:10:48.911160   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:10:48.911196   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:10:48.911265   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:48.912104   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:10:48.969066   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:10:49.000713   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:10:49.040367   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:10:49.068456   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1105 19:10:49.094166   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:10:49.115986   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:10:49.137770   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:10:49.161140   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:10:49.182996   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:10:49.206578   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:10:49.230006   73732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:10:49.245835   73732 ssh_runner.go:195] Run: openssl version
	I1105 19:10:49.251252   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:10:49.261237   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265318   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265398   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.270753   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:10:49.280568   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:10:49.290580   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294567   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294644   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.299812   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:10:49.309398   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:10:49.319451   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323490   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323543   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.328708   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:10:49.338805   73732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:10:49.342918   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:10:49.348526   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:10:49.353943   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:10:49.359527   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:10:49.364886   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:10:49.370119   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
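The certs.go steps above install each CA bundle under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/&lt;hash&gt;.0 to it; the `-checkend 86400` runs then confirm that none of the control-plane certificates expire within the next 24 hours. A hedged Go sketch that shells out to openssl for both checks; the helper names are assumptions, and the real run executes these openssl commands over SSH on the guest:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns `openssl x509 -hash -noout -in path`, the value used to
// name the /etc/ssl/certs/<hash>.0 symlink in the log above.
func subjectHash(path string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
	return strings.TrimSpace(string(out)), err
}

// expiresWithinADay mirrors `openssl x509 -checkend 86400`: a non-zero exit
// status means the certificate expires within 86400 seconds.
func expiresWithinADay(path string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	hash, err := subjectHash(cert)
	if err != nil {
		fmt.Println("could not hash certificate:", err)
		return
	}
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
	fmt.Println("expires within 24h:", expiresWithinADay(cert))
}
```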
	I1105 19:10:49.375437   73732 kubeadm.go:392] StartCluster: {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:10:49.375531   73732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:10:49.375572   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.415844   73732 cri.go:89] found id: ""
	I1105 19:10:49.415916   73732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:10:49.425336   73732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:10:49.425402   73732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:10:49.425474   73732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:10:49.434717   73732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:10:49.435831   73732 kubeconfig.go:125] found "embed-certs-271881" server: "https://192.168.39.58:8443"
	I1105 19:10:49.437903   73732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:10:49.446625   73732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I1105 19:10:49.446657   73732 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:10:49.446668   73732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:10:49.446732   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.479546   73732 cri.go:89] found id: ""
	I1105 19:10:49.479639   73732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:10:49.499034   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:10:49.510134   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:10:49.510159   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:10:49.510203   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:10:49.520482   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:10:49.520544   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:10:49.530750   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:10:49.539113   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:10:49.539183   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:10:49.548104   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.556754   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:10:49.556811   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.565606   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:10:49.574023   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:10:49.574091   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
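restartPrimaryControlPlane greps each existing kubeconfig (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf) for the expected https://control-plane.minikube.internal:8443 endpoint and removes any file that does not contain it, so the subsequent `kubeadm init phase kubeconfig all` regenerates them; here all four are missing, so all four are removed. A minimal Go sketch of that loop, with local file handling standing in for the SSH-driven grep/rm in the log:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it so that
			// `kubeadm init phase kubeconfig all` recreates it, as in the log.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = os.Remove(path)
			continue
		}
		fmt.Println("keeping", path)
	}
}
```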
	I1105 19:10:49.582888   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:10:49.591876   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:49.688517   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.070191   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.38163928s)
	I1105 19:10:51.070240   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.267774   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.329051   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.406120   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:10:51.406226   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:51.907080   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:52.406468   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:49.735558   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735923   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735987   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:49.735914   75165 retry.go:31] will retry after 1.132319443s: waiting for machine to come up
	I1105 19:10:50.870267   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870770   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870801   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:50.870715   75165 retry.go:31] will retry after 1.791598796s: waiting for machine to come up
	I1105 19:10:52.664538   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665055   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:52.664912   75165 retry.go:31] will retry after 1.910294965s: waiting for machine to come up
	I1105 19:10:52.907103   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.407319   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.421763   73732 api_server.go:72] duration metric: took 2.015640262s to wait for apiserver process to appear ...
	I1105 19:10:53.421794   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:10:53.421816   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.752768   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.752803   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.752819   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.772365   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.772412   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.922705   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.928293   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:55.928329   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.422875   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.430633   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.430667   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.922156   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.934958   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.935016   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:57.422646   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:57.428784   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:10:57.435298   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:10:57.435319   73732 api_server.go:131] duration metric: took 4.013519207s to wait for apiserver health ...
	I1105 19:10:57.435327   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:57.435333   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:57.437061   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:10:57.438374   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:10:57.448509   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:10:57.465994   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:10:57.474649   73732 system_pods.go:59] 8 kube-system pods found
	I1105 19:10:57.474682   73732 system_pods.go:61] "coredns-7c65d6cfc9-nwzpq" [be8aa054-3f68-4c19-bae3-9d9cfcb51869] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:10:57.474691   73732 system_pods.go:61] "etcd-embed-certs-271881" [c37c829b-1dca-4659-b24c-4559304d9fe0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:10:57.474703   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [6df78e2a-1360-4c4b-b451-c96aa60f24ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:10:57.474710   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [95a6baca-c246-4043-acbc-235b076a89b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:10:57.474723   73732 system_pods.go:61] "kube-proxy-f945s" [2cb835f0-3727-4dd1-bd21-a21554ffdc0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 19:10:57.474730   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [53e044c5-199c-46f4-b3db-d3b65a8203aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:10:57.474741   73732 system_pods.go:61] "metrics-server-6867b74b74-vw2sm" [403d0c5f-d870-4f89-8caa-f5e9c8bf9ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:10:57.474748   73732 system_pods.go:61] "storage-provisioner" [13a89bf9-fb97-413a-9948-1c69780784cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 19:10:57.474758   73732 system_pods.go:74] duration metric: took 8.737357ms to wait for pod list to return data ...
	I1105 19:10:57.474769   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:10:57.480599   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:10:57.480623   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:10:57.480634   73732 node_conditions.go:105] duration metric: took 5.857622ms to run NodePressure ...
	I1105 19:10:57.480651   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
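
	(Illustrative aside, not part of the captured log: the healthz polling shown above — api_server.go repeatedly GETting https://192.168.39.58:8443/healthz, printing the 500 body with its per-poststarthook [+]/[-] lines, and stopping once a 200 comes back — boils down to a small retry loop. The sketch below is a stand-alone approximation, not minikube's actual code; the endpoint, the 500ms retry interval, and the InsecureSkipVerify transport are assumptions made for the example.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 or the overall timeout expires, logging non-200 bodies along the way.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// assumption for the sketch only: skip certificate verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is up
				}
				// A 500 whose body lists failing poststarthooks (as in the log)
				// means startup is still in progress; keep polling.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.58:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
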
	I1105 19:10:54.577390   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577939   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577969   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:54.577885   75165 retry.go:31] will retry after 3.393120773s: waiting for machine to come up
	I1105 19:10:57.971960   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972441   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:57.972370   75165 retry.go:31] will retry after 4.425954537s: waiting for machine to come up
	I1105 19:10:57.896717   73732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902115   73732 kubeadm.go:739] kubelet initialised
	I1105 19:10:57.902138   73732 kubeadm.go:740] duration metric: took 5.39576ms waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902152   73732 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:10:57.907293   73732 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:10:59.913946   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:02.414802   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:03.663928   74485 start.go:364] duration metric: took 3m10.909065205s to acquireMachinesLock for "old-k8s-version-567666"
	I1105 19:11:03.664023   74485 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:03.664038   74485 fix.go:54] fixHost starting: 
	I1105 19:11:03.664514   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:03.664569   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:03.682846   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I1105 19:11:03.683341   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:03.683786   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:11:03.683812   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:03.684219   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:03.684407   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:03.684552   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetState
	I1105 19:11:03.686262   74485 fix.go:112] recreateIfNeeded on old-k8s-version-567666: state=Stopped err=<nil>
	I1105 19:11:03.686295   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	W1105 19:11:03.686440   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:03.688047   74485 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-567666" ...
	I1105 19:11:02.401454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.401980   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Found IP for machine: 192.168.50.10
	I1105 19:11:02.402015   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has current primary IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.402025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserving static IP address...
	I1105 19:11:02.402384   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.402413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserved static IP address: 192.168.50.10
	I1105 19:11:02.402432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | skip adding static IP to network mk-default-k8s-diff-port-608095 - found existing host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"}
	I1105 19:11:02.402445   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for SSH to be available...
	I1105 19:11:02.402461   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Getting to WaitForSSH function...
	I1105 19:11:02.404454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404751   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.404778   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404915   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH client type: external
	I1105 19:11:02.404964   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa (-rw-------)
	I1105 19:11:02.405032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:02.405059   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | About to run SSH command:
	I1105 19:11:02.405072   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | exit 0
	I1105 19:11:02.526769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:02.527147   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetConfigRaw
	I1105 19:11:02.527756   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.530014   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530325   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.530357   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530527   74141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/config.json ...
	I1105 19:11:02.530708   74141 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:02.530728   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:02.530921   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.532868   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533184   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.533215   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533334   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.533493   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533630   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533761   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.533930   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.534116   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.534128   74141 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:02.631085   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:02.631114   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631351   74141 buildroot.go:166] provisioning hostname "default-k8s-diff-port-608095"
	I1105 19:11:02.631376   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631540   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.634037   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634371   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.634400   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634517   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.634691   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634849   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634995   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.635136   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.635310   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.635326   74141 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-608095 && echo "default-k8s-diff-port-608095" | sudo tee /etc/hostname
	I1105 19:11:02.744298   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-608095
	
	I1105 19:11:02.744327   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.747036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747348   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.747379   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747555   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.747716   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747846   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747940   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.748061   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.748266   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.748284   74141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-608095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-608095/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-608095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:02.850828   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:02.850854   74141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:02.850906   74141 buildroot.go:174] setting up certificates
	I1105 19:11:02.850923   74141 provision.go:84] configureAuth start
	I1105 19:11:02.850935   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.851260   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.853803   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854062   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.854088   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854203   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.856341   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856629   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.856659   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856747   74141 provision.go:143] copyHostCerts
	I1105 19:11:02.856804   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:02.856823   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:02.856874   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:02.856987   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:02.856997   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:02.857017   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:02.857075   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:02.857082   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:02.857100   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:02.857148   74141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-608095 san=[127.0.0.1 192.168.50.10 default-k8s-diff-port-608095 localhost minikube]
	I1105 19:11:03.048307   74141 provision.go:177] copyRemoteCerts
	I1105 19:11:03.048362   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:03.048386   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.050951   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051303   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.051353   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051556   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.051785   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.051953   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.052084   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.128441   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:03.150680   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1105 19:11:03.172480   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:03.194311   74141 provision.go:87] duration metric: took 343.374586ms to configureAuth
	I1105 19:11:03.194338   74141 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:03.194499   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:03.194560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.197209   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197585   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.197603   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197822   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.198006   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198168   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198336   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.198503   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.198686   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.198706   74141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:03.429895   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:03.429926   74141 machine.go:96] duration metric: took 899.201597ms to provisionDockerMachine
	I1105 19:11:03.429941   74141 start.go:293] postStartSetup for "default-k8s-diff-port-608095" (driver="kvm2")
	I1105 19:11:03.429955   74141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:03.429976   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.430329   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:03.430364   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.433455   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.433791   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.433820   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.434009   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.434323   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.434500   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.434659   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.514652   74141 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:03.518678   74141 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:03.518711   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:03.518774   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:03.518877   74141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:03.519014   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:03.528972   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:03.555892   74141 start.go:296] duration metric: took 125.936355ms for postStartSetup
	I1105 19:11:03.555939   74141 fix.go:56] duration metric: took 19.896481237s for fixHost
	I1105 19:11:03.555966   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.558764   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559153   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.559183   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559402   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.559610   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559788   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559933   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.560116   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.560292   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.560303   74141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:03.663723   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833863.637227261
	
	I1105 19:11:03.663751   74141 fix.go:216] guest clock: 1730833863.637227261
	I1105 19:11:03.663766   74141 fix.go:229] Guest: 2024-11-05 19:11:03.637227261 +0000 UTC Remote: 2024-11-05 19:11:03.555945261 +0000 UTC m=+239.048686257 (delta=81.282ms)
	I1105 19:11:03.663815   74141 fix.go:200] guest clock delta is within tolerance: 81.282ms
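
	(Illustrative aside, not part of the captured log: the fix.go lines above compare the guest's "date +%s.%N" timestamp with the host clock and accept the machine when the delta is within a tolerance. A minimal sketch of that comparison follows; the 2s tolerance is an assumption, minikube's actual threshold may differ. The two timestamps are taken directly from the log and reproduce the reported delta of 81.282ms.)

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK returns the absolute difference between the guest and host
	// clocks and whether it falls within the given tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1730833863, 637227261) // 1730833863.637227261 from the guest's date +%s.%N
		host := time.Unix(1730833863, 555945261)  // the "Remote" timestamp recorded by the host
		delta, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints delta=81.282ms within tolerance: true
	}
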
	I1105 19:11:03.663822   74141 start.go:83] releasing machines lock for "default-k8s-diff-port-608095", held for 20.004399519s
	I1105 19:11:03.663858   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.664158   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:03.666922   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667372   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.667408   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668101   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668297   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668412   74141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:03.668478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.668748   74141 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:03.668774   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.671463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671781   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.671810   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671903   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672175   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672333   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.672369   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.672417   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672578   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.672598   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672779   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.673106   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.777585   74141 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:03.783343   74141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:03.927951   74141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:03.933308   74141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:03.933380   74141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:03.948472   74141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:03.948499   74141 start.go:495] detecting cgroup driver to use...
	I1105 19:11:03.948572   74141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:03.963929   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:03.978578   74141 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:03.978643   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:03.992096   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:04.006036   74141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:04.114061   74141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:04.274136   74141 docker.go:233] disabling docker service ...
	I1105 19:11:04.274220   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:04.287806   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:04.300294   74141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:04.429899   74141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:04.576075   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:04.590934   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:04.611299   74141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:04.611375   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.623876   74141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:04.623949   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.634333   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.644768   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.654549   74141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:04.665001   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.675464   74141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.693845   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.703982   74141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:04.713758   74141 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:04.713820   74141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:04.727618   74141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:04.737679   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:04.866928   74141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:04.966529   74141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:04.966599   74141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:04.971536   74141 start.go:563] Will wait 60s for crictl version
	I1105 19:11:04.971602   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:11:04.975344   74141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:05.015910   74141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:05.015987   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.043577   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.072767   74141 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
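
	(Illustrative aside, not part of the captured log: after restarting crio, the log above shows "Will wait 60s for socket path /var/run/crio/crio.sock", implemented by repeatedly stat-ing the socket. The stand-alone sketch below mirrors that wait; the 500ms poll interval is an assumption, and the helper is not minikube's actual start.go code.)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls with os.Stat until the given socket path exists or
	// the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists; CRI clients such as crictl can connect
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
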
	I1105 19:11:03.689374   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .Start
	I1105 19:11:03.689560   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring networks are active...
	I1105 19:11:03.690290   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network default is active
	I1105 19:11:03.690659   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network mk-old-k8s-version-567666 is active
	I1105 19:11:03.691130   74485 main.go:141] libmachine: (old-k8s-version-567666) Getting domain xml...
	I1105 19:11:03.691890   74485 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:11:05.006949   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting to get IP...
	I1105 19:11:05.008062   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.008547   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.008605   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.008523   75309 retry.go:31] will retry after 290.124771ms: waiting for machine to come up
	I1105 19:11:05.300185   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.300768   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.300803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.300717   75309 retry.go:31] will retry after 292.829683ms: waiting for machine to come up
	I1105 19:11:05.595365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.595881   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.595907   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.595831   75309 retry.go:31] will retry after 447.168257ms: waiting for machine to come up
	I1105 19:11:06.045320   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.045946   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.045976   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.045893   75309 retry.go:31] will retry after 420.272812ms: waiting for machine to come up
	I1105 19:11:06.467556   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.468012   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.468039   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.467962   75309 retry.go:31] will retry after 657.733497ms: waiting for machine to come up
	I1105 19:11:07.128022   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:07.128531   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:07.128559   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:07.128484   75309 retry.go:31] will retry after 922.664226ms: waiting for machine to come up
	I1105 19:11:04.416533   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:06.915445   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:07.417579   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:07.417610   73732 pod_ready.go:82] duration metric: took 9.510292246s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:07.417620   73732 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
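
	(Illustrative aside, not part of the captured log: the pod_ready lines above poll the coredns pod until its Ready condition flips to "True", with a 4m0s cap. A hedged client-go sketch of that wait follows; it is not minikube's pod_ready.go, and the kubeconfig path and 2s poll interval are assumptions made for the example.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls a pod until its Ready condition is True or the
	// timeout expires.
	func waitForPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready:"True"
					}
				}
				fmt.Printf("pod %q has status Ready:\"False\"\n", name)
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
	}

	func main() {
		// Assumption: a kubeconfig for the cluster is available at this path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(cs, "kube-system", "coredns-7c65d6cfc9-nwzpq", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
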
	I1105 19:11:05.073913   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:05.077086   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077430   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:05.077468   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077691   74141 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:05.081724   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:05.093668   74141 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:05.093785   74141 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:05.093853   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:05.128693   74141 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:05.128753   74141 ssh_runner.go:195] Run: which lz4
	I1105 19:11:05.133116   74141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:05.137101   74141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:05.137126   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:11:06.379012   74141 crio.go:462] duration metric: took 1.245926141s to copy over tarball
	I1105 19:11:06.379088   74141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:08.545369   74141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.166238549s)
	I1105 19:11:08.545405   74141 crio.go:469] duration metric: took 2.166364449s to extract the tarball
	I1105 19:11:08.545422   74141 ssh_runner.go:146] rm: /preloaded.tar.lz4
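	The copy-and-extract step above can be approximated as follows; this is an illustrative sketch that shells out the same way the log does, assuming tar, lz4 and sudo are available on the node (the path is the one shown in the log):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// extractPreload follows the logged sequence: the preloaded lz4 tarball has already
	// been copied to the node, so unpack it into /var with xattrs preserved, then remove
	// the tarball; the caller re-runs "crictl images" afterwards to confirm the images
	// are now present.
	func extractPreload(tarball string) error {
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
		}
		// Clean up the tarball afterwards, as the log does with ssh_runner rm.
		return exec.Command("sudo", "rm", "-f", tarball).Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			log.Fatal(err)
		}
	}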
	I1105 19:11:08.581651   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:08.628768   74141 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:11:08.628795   74141 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:11:08.628805   74141 kubeadm.go:934] updating node { 192.168.50.10 8444 v1.31.2 crio true true} ...
	I1105 19:11:08.628937   74141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-608095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:08.629056   74141 ssh_runner.go:195] Run: crio config
	I1105 19:11:08.690112   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:08.690140   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:08.690152   74141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:08.690184   74141 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-608095 NodeName:default-k8s-diff-port-608095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:08.690346   74141 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-608095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:08.690415   74141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:08.700222   74141 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:08.700294   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:08.709542   74141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1105 19:11:08.725723   74141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:08.741985   74141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1105 19:11:08.758655   74141 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:08.762296   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:08.774119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:08.910000   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:08.926765   74141 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095 for IP: 192.168.50.10
	I1105 19:11:08.926788   74141 certs.go:194] generating shared ca certs ...
	I1105 19:11:08.926806   74141 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:08.927006   74141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:08.927069   74141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:08.927080   74141 certs.go:256] generating profile certs ...
	I1105 19:11:08.927157   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/client.key
	I1105 19:11:08.927229   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key.f2b96156
	I1105 19:11:08.927281   74141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key
	I1105 19:11:08.927456   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:08.927506   74141 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:08.927516   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:08.927549   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:08.927585   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:08.927620   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:08.927682   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:08.928417   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:08.971359   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:09.011632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:09.049748   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:09.078632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 19:11:09.105786   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:09.127855   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:09.151461   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:11:09.174068   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:09.196733   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:09.219111   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:09.241335   74141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:09.257040   74141 ssh_runner.go:195] Run: openssl version
	I1105 19:11:09.262371   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:09.272232   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276300   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276362   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.281747   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:09.291864   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:09.302012   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306085   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306142   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.311374   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:09.321334   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:09.331208   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335401   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335451   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.340595   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:09.350430   74141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:09.354622   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:09.360165   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:09.365624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:09.371545   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:09.377226   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:09.382624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
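	These openssl -checkend 86400 calls ask whether each certificate expires within the next 24 hours. An equivalent check using Go's standard library (a sketch only; the path in main is just one of the certs listed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether a PEM certificate expires within the given window,
	// mirroring what "openssl x509 -checkend 86400" answers in the log.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}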
	I1105 19:11:09.387929   74141 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:09.388032   74141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:09.388076   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.429707   74141 cri.go:89] found id: ""
	I1105 19:11:09.429783   74141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:09.440455   74141 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:09.440476   74141 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:09.440527   74141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:09.451745   74141 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:09.452609   74141 kubeconfig.go:125] found "default-k8s-diff-port-608095" server: "https://192.168.50.10:8444"
	I1105 19:11:09.454539   74141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:09.463900   74141 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.10
	I1105 19:11:09.463926   74141 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:09.463936   74141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:09.463987   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.497583   74141 cri.go:89] found id: ""
	I1105 19:11:09.497656   74141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:09.513767   74141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:09.523219   74141 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:09.523237   74141 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:09.523284   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1105 19:11:09.533116   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:09.533181   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:09.542453   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1105 19:11:08.053120   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:08.053610   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:08.053636   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:08.053587   75309 retry.go:31] will retry after 947.415519ms: waiting for machine to come up
	I1105 19:11:09.002803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:09.003423   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:09.003452   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:09.003363   75309 retry.go:31] will retry after 1.07978111s: waiting for machine to come up
	I1105 19:11:10.084404   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:10.084808   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:10.084830   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:10.084784   75309 retry.go:31] will retry after 1.482510322s: waiting for machine to come up
	I1105 19:11:11.568421   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:11.568840   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:11.568869   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:11.568791   75309 retry.go:31] will retry after 1.630983434s: waiting for machine to come up
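	The repeated "will retry after ..." lines follow a randomized, growing backoff while libmachine waits for the VM's DHCP lease to appear. A hedged sketch of that pattern (waitForIP and its lookup callback are illustrative, not libmachine's API):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls a lookup function and, on failure, sleeps for a randomized,
	// growing interval before the next attempt. lookup stands in for "find the DHCP
	// lease matching this MAC address".
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2 // grow the base interval, roughly like the logged retries
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		_, _ = waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3)
	}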
	I1105 19:11:08.426308   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.426337   73732 pod_ready.go:82] duration metric: took 1.008708779s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.426350   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432238   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.432264   73732 pod_ready.go:82] duration metric: took 5.905051ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432276   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438187   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.438214   73732 pod_ready.go:82] duration metric: took 5.9294ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438226   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443794   73732 pod_ready.go:93] pod "kube-proxy-f945s" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.443823   73732 pod_ready.go:82] duration metric: took 5.587862ms for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443835   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:10.449498   73732 pod_ready.go:103] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:12.454934   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:12.454965   73732 pod_ready.go:82] duration metric: took 4.011121022s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:12.455003   73732 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
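	The pod_ready lines poll each pod until its Ready condition turns True (or the 4m budget runs out). A sketch of the same wait using client-go, assuming that module is on the build path; the pod and namespace names are taken from the log purely for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True, the same check the
	// pod_ready log lines report as has status "Ready":"True"/"False".
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-271881", 4*time.Minute))
	}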
	I1105 19:11:09.551174   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:09.551235   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:09.560481   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.571928   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:09.571997   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.583935   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1105 19:11:09.595336   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:09.595401   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:09.605061   74141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
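	The grep/rm pairs above drop any kubeconfig that does not reference the expected control-plane endpoint before kubeadm regenerates it. A compact sketch of that stale-config check (file handling only, no sudo or SSH; function name is illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeIfStale deletes path unless its contents mention endpoint, mirroring the
	// "grep ... || rm -f" pairs in the log. Missing files also count as stale.
	func removeIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			return nil // config already points at the right endpoint
		}
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			return err
		}
		fmt.Printf("removed stale config %s\n", path)
		return nil
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			_ = removeIfStale("/etc/kubernetes/"+f, endpoint)
		}
	}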
	I1105 19:11:09.613920   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:09.718759   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.680100   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.901034   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.951868   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.997866   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:10.997956   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.498113   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.998192   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.498517   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.998919   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:13.013078   74141 api_server.go:72] duration metric: took 2.01520799s to wait for apiserver process to appear ...
	I1105 19:11:13.013106   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:11:13.013136   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.042333   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.042388   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.042404   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.085574   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.085602   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.513733   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.518755   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:16.518789   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.013278   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.019214   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:17.019236   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.513886   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.519036   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:11:17.528970   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:11:17.529000   74141 api_server.go:131] duration metric: took 4.515887773s to wait for apiserver health ...
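	The healthz probes above progress from 403 (anonymous user rejected) through 500 (post-start hooks still failing) to 200. A standalone Go sketch of the same polling loop (endpoint copied from the log; certificate verification is skipped here, unlike the real client, which authenticates with cluster certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok",
	// matching the 403 -> 500 -> 200 progression seen in the log above.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %v", timeout)
	}

	func main() {
		// Endpoint taken from the log; adjust for your cluster.
		_ = waitHealthz("https://192.168.50.10:8444/healthz", 2*time.Minute)
	}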
	I1105 19:11:17.529009   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:17.529016   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:17.530429   74141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:11:13.201891   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:13.202425   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:13.202453   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:13.202387   75309 retry.go:31] will retry after 2.689744765s: waiting for machine to come up
	I1105 19:11:15.893632   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:15.893989   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:15.894034   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:15.893964   75309 retry.go:31] will retry after 2.460566804s: waiting for machine to come up
	I1105 19:11:14.465748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:16.961287   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:17.531600   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:11:17.544876   74141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:11:17.567835   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:11:17.583925   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:11:17.583976   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:11:17.583988   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:11:17.583999   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:11:17.584015   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:11:17.584027   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:11:17.584041   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:11:17.584052   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:11:17.584060   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:11:17.584068   74141 system_pods.go:74] duration metric: took 16.206948ms to wait for pod list to return data ...
	I1105 19:11:17.584081   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:11:17.593935   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:11:17.593960   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:11:17.593971   74141 node_conditions.go:105] duration metric: took 9.883295ms to run NodePressure ...
	I1105 19:11:17.593988   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:17.929181   74141 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933853   74141 kubeadm.go:739] kubelet initialised
	I1105 19:11:17.933879   74141 kubeadm.go:740] duration metric: took 4.667992ms waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933888   74141 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:17.940560   74141 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.952799   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952832   74141 pod_ready.go:82] duration metric: took 12.240861ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.952845   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952856   74141 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.959079   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959105   74141 pod_ready.go:82] duration metric: took 6.23649ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.959119   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959130   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.963797   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963817   74141 pod_ready.go:82] duration metric: took 4.681011ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.963830   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963837   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.970915   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970935   74141 pod_ready.go:82] duration metric: took 7.091116ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.970945   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970951   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.371478   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371503   74141 pod_ready.go:82] duration metric: took 400.5454ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.371512   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371519   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.771731   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771768   74141 pod_ready.go:82] duration metric: took 400.239012ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.771783   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771792   74141 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:19.171239   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171271   74141 pod_ready.go:82] duration metric: took 399.46983ms for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:19.171286   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171296   74141 pod_ready.go:39] duration metric: took 1.237397637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:19.171315   74141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:11:19.185845   74141 ops.go:34] apiserver oom_adj: -16
	I1105 19:11:19.185869   74141 kubeadm.go:597] duration metric: took 9.745385943s to restartPrimaryControlPlane
	I1105 19:11:19.185880   74141 kubeadm.go:394] duration metric: took 9.797958845s to StartCluster
	I1105 19:11:19.185901   74141 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.185989   74141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:19.187722   74141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.187971   74141 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:11:19.188036   74141 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:11:19.188142   74141 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188160   74141 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-608095"
	I1105 19:11:19.188159   74141 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-608095"
	W1105 19:11:19.188171   74141 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:11:19.188199   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188236   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:19.188248   74141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-608095"
	I1105 19:11:19.188273   74141 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188310   74141 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.188323   74141 addons.go:243] addon metrics-server should already be in state true
	I1105 19:11:19.188379   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188526   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188569   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188674   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188725   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188802   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188823   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.189792   74141 out.go:177] * Verifying Kubernetes components...
	I1105 19:11:19.191119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:19.203875   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I1105 19:11:19.204313   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.204803   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.204830   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.205083   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I1105 19:11:19.205175   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.205432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.205488   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.205973   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.205999   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.206357   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.206916   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.206955   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.207292   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I1105 19:11:19.207671   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.208122   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.208146   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.208484   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.208861   74141 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.208882   74141 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:11:19.208909   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.209004   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209045   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.209234   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209273   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.223963   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I1105 19:11:19.224405   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.225044   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.225074   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.225460   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.226141   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I1105 19:11:19.226463   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.226509   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.226577   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.226757   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I1105 19:11:19.227058   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.227081   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.227475   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.227558   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.227797   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.228116   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.228136   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.228530   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.228755   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.229870   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.230471   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.232239   74141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:19.232263   74141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:11:19.233508   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:11:19.233527   74141 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:11:19.233548   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.233607   74141 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.233626   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:11:19.233647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.237337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237365   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237895   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237928   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237958   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237972   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.238155   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238270   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238440   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238623   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238681   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.239040   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.243685   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1105 19:11:19.244073   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.244584   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.244602   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.244951   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.245112   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.246617   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.246814   74141 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.246830   74141 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:11:19.246845   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.249467   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.249896   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.249925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.250139   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.250317   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.250466   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.250636   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.396917   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:19.412224   74141 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:19.541493   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.566934   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:11:19.566982   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:11:19.567627   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.607685   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:11:19.607717   74141 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:11:19.640921   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:19.640959   74141 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:11:19.674550   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:20.091222   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091248   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091528   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091583   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091596   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091605   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091807   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091868   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091853   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.105073   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.105093   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.105426   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.105442   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719139   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.151476995s)
	I1105 19:11:20.719187   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719194   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.044605505s)
	I1105 19:11:20.719236   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719256   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719511   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719582   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719593   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719596   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719631   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719580   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719643   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719654   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719670   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719680   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719897   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719946   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719948   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719903   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719982   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719990   74141 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-608095"
	I1105 19:11:20.719927   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.721843   74141 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1105 19:11:22.583507   73496 start.go:364] duration metric: took 54.335724939s to acquireMachinesLock for "no-preload-459223"
	I1105 19:11:22.583581   73496 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:22.583590   73496 fix.go:54] fixHost starting: 
	I1105 19:11:22.584018   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:22.584054   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:22.603921   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1105 19:11:22.604367   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:22.604825   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:11:22.604845   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:22.605233   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:22.605408   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:22.605534   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:11:22.607289   73496 fix.go:112] recreateIfNeeded on no-preload-459223: state=Stopped err=<nil>
	I1105 19:11:22.607314   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	W1105 19:11:22.607458   73496 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:22.609455   73496 out.go:177] * Restarting existing kvm2 VM for "no-preload-459223" ...
	I1105 19:11:18.357643   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:18.358065   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:18.358099   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:18.358009   75309 retry.go:31] will retry after 3.036834524s: waiting for machine to come up
	I1105 19:11:21.398221   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398763   74485 main.go:141] libmachine: (old-k8s-version-567666) Found IP for machine: 192.168.61.125
	I1105 19:11:21.398825   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has current primary IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398843   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserving static IP address...
	I1105 19:11:21.399327   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.399350   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserved static IP address: 192.168.61.125
	I1105 19:11:21.399365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | skip adding static IP to network mk-old-k8s-version-567666 - found existing host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"}
	I1105 19:11:21.399379   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:11:21.399394   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting for SSH to be available...
	I1105 19:11:21.401270   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401664   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.401691   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401866   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:11:21.401897   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:11:21.401935   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:21.401949   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:11:21.401959   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:11:21.527815   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:21.528165   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:11:21.528874   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.531373   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531647   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.531672   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531876   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:11:21.532071   74485 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:21.532092   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:21.532332   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.534177   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534431   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.534465   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534556   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.534716   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534845   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534960   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.535142   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.535329   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.535341   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:21.643321   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:21.643354   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643618   74485 buildroot.go:166] provisioning hostname "old-k8s-version-567666"
	I1105 19:11:21.643646   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643812   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.646230   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646628   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.646666   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.647037   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647167   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647290   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.647421   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.647579   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.647592   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-567666 && echo "old-k8s-version-567666" | sudo tee /etc/hostname
	I1105 19:11:21.770209   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-567666
	
	I1105 19:11:21.770255   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.772932   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773314   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.773346   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773484   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.773691   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773950   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.774121   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.774357   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.774386   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-567666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-567666/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-567666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:21.890834   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:21.890860   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:21.890915   74485 buildroot.go:174] setting up certificates
	I1105 19:11:21.890929   74485 provision.go:84] configureAuth start
	I1105 19:11:21.890944   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.891224   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.893835   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894256   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.894285   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.896436   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896699   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.896715   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896893   74485 provision.go:143] copyHostCerts
	I1105 19:11:21.896951   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:21.896967   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:21.897037   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:21.897163   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:21.897176   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:21.897205   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:21.897279   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:21.897289   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:21.897315   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:21.897396   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-567666 san=[127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666]
	I1105 19:11:21.962153   74485 provision.go:177] copyRemoteCerts
	I1105 19:11:21.962219   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:21.962257   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.964765   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965125   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.965166   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965330   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.965478   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.965603   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.965746   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.048519   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:22.072975   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 19:11:22.098263   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:22.120258   74485 provision.go:87] duration metric: took 229.316972ms to configureAuth
	I1105 19:11:22.120285   74485 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:22.120444   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:11:22.120516   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.123859   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124309   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.124344   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124536   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.124737   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.124922   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.125055   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.125213   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.125375   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.125388   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:22.349922   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:22.349964   74485 machine.go:96] duration metric: took 817.87332ms to provisionDockerMachine
	I1105 19:11:22.349979   74485 start.go:293] postStartSetup for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:11:22.349992   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:22.350014   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.350350   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:22.350385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.352922   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353310   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.353332   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353459   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.353638   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.353807   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.353921   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.437482   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:22.441617   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:22.441646   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:22.441711   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:22.441807   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:22.441929   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:22.451016   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:22.474199   74485 start.go:296] duration metric: took 124.207336ms for postStartSetup
	I1105 19:11:22.474233   74485 fix.go:56] duration metric: took 18.810197154s for fixHost
	I1105 19:11:22.474269   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.476786   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477119   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.477157   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477279   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.477471   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477621   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477753   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.477910   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.478070   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.478081   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:22.583343   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833882.558222038
	
	I1105 19:11:22.583363   74485 fix.go:216] guest clock: 1730833882.558222038
	I1105 19:11:22.583372   74485 fix.go:229] Guest: 2024-11-05 19:11:22.558222038 +0000 UTC Remote: 2024-11-05 19:11:22.474236871 +0000 UTC m=+209.862783450 (delta=83.985167ms)
	I1105 19:11:22.583418   74485 fix.go:200] guest clock delta is within tolerance: 83.985167ms
	I1105 19:11:22.583429   74485 start.go:83] releasing machines lock for "old-k8s-version-567666", held for 18.919444623s
	I1105 19:11:22.583460   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.583717   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:22.586183   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586479   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.586509   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586687   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587137   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587310   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587400   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:22.587448   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.587521   74485 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:22.587548   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.590145   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590474   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.590507   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590530   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590655   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.590831   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.590995   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.591010   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591037   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.591179   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.591286   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.591438   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.591558   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591702   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:19.461723   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:21.962582   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:22.702707   74485 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:22.708965   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:22.856764   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:22.863791   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:22.863866   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:22.883997   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:22.884022   74485 start.go:495] detecting cgroup driver to use...
	I1105 19:11:22.884094   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:22.901499   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:22.919358   74485 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:22.919422   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:22.936964   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:22.953538   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:23.077720   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:23.218316   74485 docker.go:233] disabling docker service ...
	I1105 19:11:23.218390   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:23.238316   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:23.251814   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:23.427386   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:23.552928   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:23.567149   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:23.587241   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 19:11:23.587307   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.597558   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:23.597620   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.607466   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.616794   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.626425   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:23.637121   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:23.649243   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:23.649305   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:23.664648   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:23.675060   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:23.812636   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:23.903326   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:23.903404   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:23.908377   74485 start.go:563] Will wait 60s for crictl version
	I1105 19:11:23.908434   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:23.912163   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:23.961712   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:23.961794   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:23.992951   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:24.032041   74485 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1105 19:11:20.723316   74141 addons.go:510] duration metric: took 1.53528546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1105 19:11:21.416385   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:23.416458   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:22.610737   73496 main.go:141] libmachine: (no-preload-459223) Calling .Start
	I1105 19:11:22.610910   73496 main.go:141] libmachine: (no-preload-459223) Ensuring networks are active...
	I1105 19:11:22.611680   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network default is active
	I1105 19:11:22.612057   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network mk-no-preload-459223 is active
	I1105 19:11:22.612426   73496 main.go:141] libmachine: (no-preload-459223) Getting domain xml...
	I1105 19:11:22.613081   73496 main.go:141] libmachine: (no-preload-459223) Creating domain...
	I1105 19:11:24.013821   73496 main.go:141] libmachine: (no-preload-459223) Waiting to get IP...
	I1105 19:11:24.014922   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.015467   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.015561   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.015439   75501 retry.go:31] will retry after 233.461829ms: waiting for machine to come up
	I1105 19:11:24.251339   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.252673   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.252799   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.252760   75501 retry.go:31] will retry after 276.401207ms: waiting for machine to come up
	I1105 19:11:24.531408   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.531964   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.531987   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.531909   75501 retry.go:31] will retry after 367.69826ms: waiting for machine to come up
	I1105 19:11:24.901179   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.901579   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.901608   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.901536   75501 retry.go:31] will retry after 602.654501ms: waiting for machine to come up
	I1105 19:11:25.505889   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:25.506403   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:25.506426   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:25.506364   75501 retry.go:31] will retry after 492.077165ms: waiting for machine to come up
	I1105 19:11:24.033400   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:24.036549   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037128   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:24.037165   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037346   74485 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:24.042641   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
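	The bash one-liner above rewrites /etc/hosts idempotently: it drops any stale host.minikube.internal line before appending the fresh mapping. A minimal local sketch of the same idea (the helper name and file mode are assumptions; in the log this runs remotely through ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHostsEntry removes any existing line ending in "\tname" and appends "ip\tname".
	func pinHostsEntry(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
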
	I1105 19:11:24.055174   74485 kubeadm.go:883] updating cluster {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:24.055327   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:11:24.055388   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:24.101655   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:24.101724   74485 ssh_runner.go:195] Run: which lz4
	I1105 19:11:24.105618   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:24.109705   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:24.109735   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 19:11:25.602158   74485 crio.go:462] duration metric: took 1.496564307s to copy over tarball
	I1105 19:11:25.602236   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
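	The preload check above shells out to "crictl images --output json" and looks for the expected image tag before falling back to copying and extracting the tarball. A rough sketch of that check, assuming crictl's JSON output has an "images" array with "repoTags" fields:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether the container runtime already knows the given tag.
	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var parsed crictlImages
		if err := json.Unmarshal(out, &parsed); err != nil {
			return false, err
		}
		for _, img := range parsed.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
		fmt.Println(ok, err)
	}
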
	I1105 19:11:23.963218   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:26.461963   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:25.419351   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:26.916693   74141 node_ready.go:49] node "default-k8s-diff-port-608095" has status "Ready":"True"
	I1105 19:11:26.916731   74141 node_ready.go:38] duration metric: took 7.50447744s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:26.916744   74141 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:26.922179   74141 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927845   74141 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.927879   74141 pod_ready.go:82] duration metric: took 5.666725ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927892   74141 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932723   74141 pod_ready.go:93] pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.932752   74141 pod_ready.go:82] duration metric: took 4.843531ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932761   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937108   74141 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.937137   74141 pod_ready.go:82] duration metric: took 4.368536ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937152   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.941970   74141 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.941995   74141 pod_ready.go:82] duration metric: took 4.833418ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.942008   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317480   74141 pod_ready.go:93] pod "kube-proxy-8v42c" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.317505   74141 pod_ready.go:82] duration metric: took 375.489077ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317517   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717923   74141 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.717945   74141 pod_ready.go:82] duration metric: took 400.42059ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717956   74141 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
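	The pod_ready.go waits above poll each system-critical pod until its Ready condition turns True. A minimal client-go sketch of such a wait (kubeconfig path, poll interval, and timeout are assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls the pod until its Ready condition is True or ctx expires.
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(waitForPodReady(ctx, cs, "kube-system", "etcd-default-k8s-diff-port-608095"))
	}
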
	I1105 19:11:26.000041   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.000558   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.000613   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.000525   75501 retry.go:31] will retry after 920.198126ms: waiting for machine to come up
	I1105 19:11:26.922134   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.922917   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.922951   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.922858   75501 retry.go:31] will retry after 1.071853506s: waiting for machine to come up
	I1105 19:11:27.996574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:27.996995   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:27.997020   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:27.996949   75501 retry.go:31] will retry after 1.283200825s: waiting for machine to come up
	I1105 19:11:29.282457   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:29.282942   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:29.282979   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:29.282903   75501 retry.go:31] will retry after 1.512809658s: waiting for machine to come up
	I1105 19:11:28.701223   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.098952901s)
	I1105 19:11:28.701253   74485 crio.go:469] duration metric: took 3.099065633s to extract the tarball
	I1105 19:11:28.701263   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:28.744214   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:28.778845   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:28.778868   74485 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:28.778962   74485 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:28.778945   74485 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.779024   74485 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.779039   74485 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.778939   74485 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.779067   74485 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.779083   74485 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.778957   74485 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781024   74485 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781003   74485 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.781052   74485 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.781002   74485 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.781088   74485 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.781114   74485 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.013637   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 19:11:29.043928   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.043936   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.044140   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.045892   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.046313   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.055792   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.081724   74485 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 19:11:29.081779   74485 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 19:11:29.081826   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.234925   74485 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 19:11:29.234966   74485 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.235046   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235079   74485 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 19:11:29.235112   74485 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.235136   74485 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 19:11:29.235152   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235167   74485 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.235200   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235238   74485 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 19:11:29.235277   74485 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.235298   74485 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 19:11:29.235320   74485 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.235333   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235352   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235351   74485 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 19:11:29.235385   74485 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.235415   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235426   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.251873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.251960   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.251985   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.252000   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.371298   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.415548   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.415592   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.415654   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.415710   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.415791   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.415868   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.466873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.544593   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.544660   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.586695   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.586714   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.586812   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.586916   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.606582   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 19:11:29.707767   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 19:11:29.707803   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 19:11:29.716195   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 19:11:29.723097   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 19:11:30.039971   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:30.182760   74485 cache_images.go:92] duration metric: took 1.403874987s to LoadCachedImages
	W1105 19:11:30.182890   74485 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1105 19:11:30.182912   74485 kubeadm.go:934] updating node { 192.168.61.125 8443 v1.20.0 crio true true} ...
	I1105 19:11:30.183052   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-567666 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:30.183146   74485 ssh_runner.go:195] Run: crio config
	I1105 19:11:30.235206   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:11:30.235241   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:30.235253   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:30.235277   74485 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-567666 NodeName:old-k8s-version-567666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 19:11:30.235433   74485 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-567666"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:30.235503   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 19:11:30.245189   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:30.245263   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:30.254772   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1105 19:11:30.271711   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:30.288568   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1105 19:11:30.309098   74485 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:30.313211   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:30.325637   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:30.447346   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:30.466863   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666 for IP: 192.168.61.125
	I1105 19:11:30.466884   74485 certs.go:194] generating shared ca certs ...
	I1105 19:11:30.466898   74485 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:30.467086   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:30.467152   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:30.467165   74485 certs.go:256] generating profile certs ...
	I1105 19:11:30.467322   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key
	I1105 19:11:30.467398   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8
	I1105 19:11:30.467448   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key
	I1105 19:11:30.467614   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:30.467656   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:30.467676   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:30.467722   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:30.467759   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:30.467788   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:30.467847   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:30.468756   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:30.532325   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:30.559936   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:30.592995   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:30.632421   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 19:11:30.662285   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:11:30.696292   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:30.725642   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:30.750231   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:30.773213   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:30.796269   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:30.820261   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:30.837059   74485 ssh_runner.go:195] Run: openssl version
	I1105 19:11:30.842937   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:30.855033   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859637   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859720   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.865747   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:30.877678   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:30.890762   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895576   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895642   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.901686   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:30.912689   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:30.923800   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928911   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928984   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.934782   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
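	Each "openssl x509 -hash" / "ln -fs" pair above installs a CA under its OpenSSL subject-hash name in /etc/ssl/certs so the system trust store can find it. A small sketch of the same step (the helper name is an assumption; computing the hash is still delegated to openssl):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert symlinks certPath into certsDir as <subject-hash>.0.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace a stale link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
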
	I1105 19:11:30.947059   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:30.951934   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:30.958065   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:30.965341   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:30.971725   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:30.977606   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:30.983486   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
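	The "-checkend 86400" runs above ask whether each certificate remains valid for at least another 24 hours. The same check done natively with crypto/x509 (a sketch; the certificate path is taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file expires before now+d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
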
	I1105 19:11:30.989212   74485 kubeadm.go:392] StartCluster: {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:30.989350   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:30.989411   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.031794   74485 cri.go:89] found id: ""
	I1105 19:11:31.031884   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:31.043178   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:31.043202   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:31.043291   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:31.054102   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:31.055256   74485 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:31.055924   74485 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-567666" cluster setting kubeconfig missing "old-k8s-version-567666" context setting]
	I1105 19:11:31.056913   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:31.064220   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:31.074582   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.125
	I1105 19:11:31.074618   74485 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:31.074628   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:31.074706   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.111157   74485 cri.go:89] found id: ""
	I1105 19:11:31.111241   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:31.130027   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:31.139917   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:31.139939   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:31.140007   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:31.150790   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:31.150868   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:31.161397   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:31.170394   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:31.170462   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:31.179594   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.188892   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:31.188952   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.199840   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:31.209166   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:31.209244   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:31.219687   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:31.231079   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:31.350667   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.094565   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.334807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.457538   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.534503   74485 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:32.534596   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:28.464017   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.962422   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:29.725325   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:32.225372   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.796963   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:30.797438   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:30.797489   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:30.797407   75501 retry.go:31] will retry after 1.774832047s: waiting for machine to come up
	I1105 19:11:32.574423   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:32.575000   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:32.575047   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:32.574929   75501 retry.go:31] will retry after 2.041093372s: waiting for machine to come up
	I1105 19:11:34.618469   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:34.618954   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:34.619015   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:34.618915   75501 retry.go:31] will retry after 2.731949113s: waiting for machine to come up
	I1105 19:11:33.034690   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:33.535594   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.035526   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.534836   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.034947   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.535108   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.035417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.535438   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.034766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.535415   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:32.962469   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.963093   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.461010   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.724484   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.224511   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.352209   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:37.352752   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:37.352783   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:37.352686   75501 retry.go:31] will retry after 3.62202055s: waiting for machine to come up
	I1105 19:11:38.035553   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:38.534702   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.035332   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.534749   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.034989   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.535354   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.035624   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.534847   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.035293   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.535363   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
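	The repeated "pgrep -xnf kube-apiserver.*minikube.*" runs above poll for the apiserver process after the kubeadm init phases, roughly every half second. A minimal sketch of that wait loop (the helper name, cadence, and timeout are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess returns once pgrep finds a matching process or the timeout passes.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when at least one process matches the pattern.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServerProcess(4 * time.Minute))
	}
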
	I1105 19:11:39.465635   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:41.961348   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:40.978791   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979231   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has current primary IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979249   73496 main.go:141] libmachine: (no-preload-459223) Found IP for machine: 192.168.72.101
	I1105 19:11:40.979258   73496 main.go:141] libmachine: (no-preload-459223) Reserving static IP address...
	I1105 19:11:40.979621   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.979650   73496 main.go:141] libmachine: (no-preload-459223) Reserved static IP address: 192.168.72.101
	I1105 19:11:40.979669   73496 main.go:141] libmachine: (no-preload-459223) DBG | skip adding static IP to network mk-no-preload-459223 - found existing host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"}
	I1105 19:11:40.979682   73496 main.go:141] libmachine: (no-preload-459223) Waiting for SSH to be available...
	I1105 19:11:40.979710   73496 main.go:141] libmachine: (no-preload-459223) DBG | Getting to WaitForSSH function...
	I1105 19:11:40.981725   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.982063   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982202   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH client type: external
	I1105 19:11:40.982227   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa (-rw-------)
	I1105 19:11:40.982258   73496 main.go:141] libmachine: (no-preload-459223) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:40.982286   73496 main.go:141] libmachine: (no-preload-459223) DBG | About to run SSH command:
	I1105 19:11:40.982310   73496 main.go:141] libmachine: (no-preload-459223) DBG | exit 0
	I1105 19:11:41.111259   73496 main.go:141] libmachine: (no-preload-459223) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:41.111639   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetConfigRaw
	I1105 19:11:41.112368   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.114811   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115215   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.115244   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115499   73496 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/config.json ...
	I1105 19:11:41.115687   73496 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:41.115705   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:41.115900   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.118059   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118481   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.118505   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118659   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.118833   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.118959   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.119078   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.119222   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.119426   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.119442   73496 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:41.235030   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:41.235060   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235270   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:11:41.235294   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235480   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.237980   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238288   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.238327   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238405   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.238567   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238687   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238805   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.238938   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.239150   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.239163   73496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-459223 && echo "no-preload-459223" | sudo tee /etc/hostname
	I1105 19:11:41.366664   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-459223
	
	I1105 19:11:41.366693   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.369672   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.369979   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.370006   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.370147   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.370335   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370661   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.370830   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.371067   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.371086   73496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-459223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-459223/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-459223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:41.495741   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:41.495774   73496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:41.495796   73496 buildroot.go:174] setting up certificates
	I1105 19:11:41.495804   73496 provision.go:84] configureAuth start
	I1105 19:11:41.495816   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.496076   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.498948   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499377   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.499409   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499552   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.501842   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502168   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.502198   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502367   73496 provision.go:143] copyHostCerts
	I1105 19:11:41.502428   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:41.502445   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:41.502516   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:41.502662   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:41.502674   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:41.502706   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:41.502814   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:41.502825   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:41.502853   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:41.502934   73496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.no-preload-459223 san=[127.0.0.1 192.168.72.101 localhost minikube no-preload-459223]
	I1105 19:11:41.648058   73496 provision.go:177] copyRemoteCerts
	I1105 19:11:41.648115   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:41.648137   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.650915   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651274   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.651306   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.651707   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.651878   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.652032   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:41.736549   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:11:41.759352   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:41.782205   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:41.804725   73496 provision.go:87] duration metric: took 308.906806ms to configureAuth
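The configureAuth step above generates a server certificate whose SANs include 127.0.0.1, 192.168.72.101, localhost, minikube and no-preload-459223, then copies it to /etc/docker on the guest. A quick way to confirm those SANs on the node, using standard openssl rather than anything minikube-specific (the certificate path is taken from the scp lines above):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"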
	I1105 19:11:41.804755   73496 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:41.804930   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:41.805011   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.807634   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.808071   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.808498   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808657   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808792   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.808960   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.809113   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.809125   73496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:42.033406   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:42.033449   73496 machine.go:96] duration metric: took 917.749182ms to provisionDockerMachine
	I1105 19:11:42.033462   73496 start.go:293] postStartSetup for "no-preload-459223" (driver="kvm2")
	I1105 19:11:42.033475   73496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:42.033506   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.033853   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:42.033883   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.037259   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037688   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.037722   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037869   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.038063   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.038231   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.038361   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.126624   73496 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:42.130761   73496 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:42.130794   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:42.130881   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:42.131006   73496 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:42.131120   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:42.140978   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:42.163880   73496 start.go:296] duration metric: took 130.405487ms for postStartSetup
	I1105 19:11:42.163933   73496 fix.go:56] duration metric: took 19.580327925s for fixHost
	I1105 19:11:42.163953   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.166648   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.166994   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.167025   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.167196   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.167394   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167565   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167705   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.167856   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:42.168016   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:42.168025   73496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:42.279303   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833902.251467447
	
	I1105 19:11:42.279336   73496 fix.go:216] guest clock: 1730833902.251467447
	I1105 19:11:42.279351   73496 fix.go:229] Guest: 2024-11-05 19:11:42.251467447 +0000 UTC Remote: 2024-11-05 19:11:42.163937292 +0000 UTC m=+356.505256250 (delta=87.530155ms)
	I1105 19:11:42.279378   73496 fix.go:200] guest clock delta is within tolerance: 87.530155ms
	I1105 19:11:42.279387   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 19.695831159s
	I1105 19:11:42.279417   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.279660   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:42.282462   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.282828   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.282871   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.283018   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283439   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283580   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283669   73496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:42.283716   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.283811   73496 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:42.283838   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.286528   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286754   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286891   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.286917   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287097   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.287112   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287124   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287313   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287495   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287510   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287666   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287664   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.287769   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.398511   73496 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:42.404337   73496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:42.550196   73496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:42.555775   73496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:42.555853   73496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:42.571003   73496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:42.571031   73496 start.go:495] detecting cgroup driver to use...
	I1105 19:11:42.571123   73496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:42.586390   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:42.599887   73496 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:42.599944   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:42.613260   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:42.626371   73496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:42.736949   73496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:42.898897   73496 docker.go:233] disabling docker service ...
	I1105 19:11:42.898965   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:42.912534   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:42.925075   73496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:43.043425   73496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:43.175468   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:43.190803   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:43.210413   73496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:43.210496   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.221971   73496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:43.222064   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.232251   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.241540   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.251131   73496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:43.261218   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.270932   73496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.287905   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.297730   73496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:43.307263   73496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:43.307319   73496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:43.319421   73496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:43.328415   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:43.445798   73496 ssh_runner.go:195] Run: sudo systemctl restart crio
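The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. As a rough sketch, the drop-in ends up carrying roughly the following settings; the TOML section headers are assumptions (only the keys and values appear in the log), and the tee form is illustrative rather than what the log actually runs:

    # Illustrative reconstruction of the CRI-O drop-in after the edits above;
    # the real flow patches the existing file with sed instead of rewriting it.
    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio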
	I1105 19:11:43.532190   73496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:43.532284   73496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:43.536931   73496 start.go:563] Will wait 60s for crictl version
	I1105 19:11:43.536986   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.540525   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:43.576428   73496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:43.576540   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.603034   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.631229   73496 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:39.724162   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:42.224141   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:44.224609   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
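The interleaved pod_ready lines above come from another profile polling a metrics-server pod that keeps reporting Ready: False. Outside the test harness, the usual way to see why would be plain kubectl against the same cluster (the --context flag for the relevant profile is omitted here, and the deployment name is inferred from the pod name):

    kubectl -n kube-system get pod metrics-server-6867b74b74-44mcg -o wide
    kubectl -n kube-system describe pod metrics-server-6867b74b74-44mcg
    kubectl -n kube-system logs deploy/metrics-server --tail=50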
	I1105 19:11:43.632482   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:43.634912   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635227   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:43.635260   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635530   73496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:43.639287   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
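The Run: line above uses a small idiom worth spelling out: it filters any existing host.minikube.internal entry out of /etc/hosts, appends the new mapping, writes the result to a temp file, and copies it back with sudo, since a plain shell redirect onto /etc/hosts would not run with root privileges. The same pattern, expanded (the tab separator between IP and name is intentional):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.72.1\thost.minikube.internal\n'
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$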
	I1105 19:11:43.650818   73496 kubeadm.go:883] updating cluster {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:43.650963   73496 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:43.651042   73496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:43.685392   73496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:43.685421   73496 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:43.685492   73496 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.685500   73496 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.685517   73496 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.685547   73496 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.685506   73496 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.685569   73496 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.685558   73496 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.685623   73496 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.686958   73496 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.686979   73496 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.686976   73496 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.687017   73496 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.687030   73496 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.687057   73496 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1105 19:11:43.898928   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.914069   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1105 19:11:43.934388   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.940664   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.947392   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.951614   73496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1105 19:11:43.951652   73496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.951686   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.957000   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.045057   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.075256   73496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1105 19:11:44.075289   73496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1105 19:11:44.075304   73496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.075310   73496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075357   73496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1105 19:11:44.075388   73496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075417   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.075481   73496 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1105 19:11:44.075431   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075511   73496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.075543   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.102803   73496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1105 19:11:44.102856   73496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.102916   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.133582   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.133640   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.133655   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.133707   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.188042   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.188058   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.272464   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.272500   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.272467   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.272531   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.289003   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.289126   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.411162   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1105 19:11:44.411248   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.411307   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1105 19:11:44.411326   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:44.411361   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1105 19:11:44.411394   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:44.411432   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478064   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1105 19:11:44.478093   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478132   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1105 19:11:44.478152   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478178   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1105 19:11:44.478195   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1105 19:11:44.478211   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1105 19:11:44.478226   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:44.478249   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1105 19:11:44.478257   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:44.478324   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:44.889847   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.035199   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.534769   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.035551   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.535664   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.035103   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.535581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.035077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.535660   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.035462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.534898   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
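The repeated probes above are a restart path on another profile waiting for an apiserver process to appear. The pgrep flags are standard procps options:

    # The probe repeated above, annotated:
    sudo pgrep -x -n -f 'kube-apiserver.*minikube.*'
    #   -x  require the pattern to match exactly (anchored), not as a substring
    #   -n  report only the newest matching process
    #   -f  match against the full command line, not just the process name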
	I1105 19:11:43.962742   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.462884   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.724058   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:48.727054   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.976315   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.498135546s)
	I1105 19:11:46.976348   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1105 19:11:46.976361   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.498084867s)
	I1105 19:11:46.976386   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.498096252s)
	I1105 19:11:46.976392   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.498054417s)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1105 19:11:46.976395   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1105 19:11:46.976368   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976436   73496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.086553002s)
	I1105 19:11:46.976471   73496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1105 19:11:46.976488   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976506   73496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:46.976551   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:49.054369   73496 ssh_runner.go:235] Completed: which crictl: (2.077794607s)
	I1105 19:11:49.054455   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:49.054480   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.077976168s)
	I1105 19:11:49.054497   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1105 19:11:49.054520   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.054551   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.089648   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.509600   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455021031s)
	I1105 19:11:50.509639   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1105 19:11:50.509664   73496 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509679   73496 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.419997127s)
	I1105 19:11:50.509719   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509751   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.547301   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1105 19:11:50.547416   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:48.035320   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.535496   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.035636   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.535445   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.035499   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.535722   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.035700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.535310   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.035585   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.535468   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.962134   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.463479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.225155   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:53.723881   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:54.139987   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.592545704s)
	I1105 19:11:54.140021   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1105 19:11:54.140038   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.630297093s)
	I1105 19:11:54.140058   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1105 19:11:54.140089   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:54.140150   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:53.034919   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.535697   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.035353   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.534669   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.034957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.534747   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.035331   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.534699   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.465549   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.961291   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.725153   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:58.224417   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.887208   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.747032149s)
	I1105 19:11:55.887247   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1105 19:11:55.887278   73496 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:55.887331   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:57.753834   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.866475995s)
	I1105 19:11:57.753860   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1105 19:11:57.753879   73496 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:57.753917   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:58.605444   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1105 19:11:58.605490   73496 cache_images.go:123] Successfully loaded all cached images
	I1105 19:11:58.605498   73496 cache_images.go:92] duration metric: took 14.920064519s to LoadCachedImages
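The image-cache phase that finishes here follows one pattern per image, visible in the lines above: inspect the runtime for the image, remove any stale copy with crictl, then podman-load the cached tarball from /var/lib/minikube/images. A minimal sketch of that per-image step, with one image from this log filled in:

    img="registry.k8s.io/kube-apiserver:v1.31.2"
    tar="/var/lib/minikube/images/kube-apiserver_v1.31.2"
    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
      sudo /usr/bin/crictl rmi "$img" || true   # drop any mismatched copy first
      sudo podman load -i "$tar"                # load the cached tarball into the runtime
    fi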
	I1105 19:11:58.605512   73496 kubeadm.go:934] updating node { 192.168.72.101 8443 v1.31.2 crio true true} ...
	I1105 19:11:58.605627   73496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-459223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:58.605719   73496 ssh_runner.go:195] Run: crio config
	I1105 19:11:58.654396   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:11:58.654422   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:58.654432   73496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:58.654456   73496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.101 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-459223 NodeName:no-preload-459223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:58.654636   73496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-459223"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.101"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.101"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:58.654714   73496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:58.666580   73496 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:58.666659   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:58.676390   73496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:11:58.692426   73496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:58.708650   73496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1105 19:11:58.727451   73496 ssh_runner.go:195] Run: grep 192.168.72.101	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:58.731200   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
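
The /etc/hosts rewrite above is a guard-then-replace pattern: any existing control-plane.minikube.internal entry is filtered out and a fresh one is appended, so repeated starts stay idempotent. A minimal Go sketch of the same idea (illustrative only, not minikube's ssh_runner code; the path, IP, and hostname are the ones shown in the log line above):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites hostsPath so exactly one line maps ip to host,
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this hostname (tab-separated, end of line).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.72.101", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
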
	I1105 19:11:58.743437   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:58.850614   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:58.867662   73496 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223 for IP: 192.168.72.101
	I1105 19:11:58.867694   73496 certs.go:194] generating shared ca certs ...
	I1105 19:11:58.867715   73496 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:58.867896   73496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:58.867954   73496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:58.867988   73496 certs.go:256] generating profile certs ...
	I1105 19:11:58.868073   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/client.key
	I1105 19:11:58.868129   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key.0f61fe1e
	I1105 19:11:58.868163   73496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key
	I1105 19:11:58.868276   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:58.868316   73496 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:58.868323   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:58.868347   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:58.868380   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:58.868409   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:58.868450   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:58.869179   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:58.911433   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:58.947863   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:58.977511   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:59.022637   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:11:59.060992   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:59.086516   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:59.109616   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:59.135019   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:59.159832   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:59.184470   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:59.207138   73496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:59.224379   73496 ssh_runner.go:195] Run: openssl version
	I1105 19:11:59.230142   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:59.243624   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248086   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248157   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.253684   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:59.264169   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:59.274837   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279102   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279159   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.284540   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:59.295198   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:59.306105   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310073   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310115   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.315240   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
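
The three test -L / ln -fs steps above install each certificate into the system trust store under its OpenSSL subject-hash name (e.g. b5213941.0, 51391683.0, 3ec20f2e.0). A hedged Go sketch of the same sequence, shelling out to openssl exactly as the logged commands do (illustrative, not the certs.go implementation; paths are the ones shown above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 -> certPath, like the "ln -fs" commands in the log.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/15492.pem",
		"/usr/share/ca-certificates/154922.pem",
	} {
		if err := linkCertByHash(c); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
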
	I1105 19:11:59.325470   73496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:59.329485   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:59.334985   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:59.340316   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:59.345717   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:59.351082   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:59.356631   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
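
Each "openssl x509 -checkend 86400" above exits non-zero if the certificate expires within the next 24 hours, which lets the restart path confirm the existing control-plane certificates are still usable. The same check expressed in Go with crypto/x509 (a sketch, assuming the PEM file is readable locally; the path is one of those tested above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// before now+window, the same condition "-checkend 86400" tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
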
	I1105 19:11:59.361951   73496 kubeadm.go:392] StartCluster: {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:59.362047   73496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:59.362084   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.398746   73496 cri.go:89] found id: ""
	I1105 19:11:59.398819   73496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:59.408597   73496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:59.408614   73496 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:59.408656   73496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:59.418082   73496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:59.419128   73496 kubeconfig.go:125] found "no-preload-459223" server: "https://192.168.72.101:8443"
	I1105 19:11:59.421286   73496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:59.430458   73496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.101
	I1105 19:11:59.430490   73496 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:59.430500   73496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:59.430549   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.464047   73496 cri.go:89] found id: ""
	I1105 19:11:59.464102   73496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:59.480978   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:59.490808   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:59.490829   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:59.490871   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:59.499505   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:59.499559   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:59.508247   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:59.516942   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:59.517005   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:59.525910   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.534349   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:59.534392   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.544212   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:59.553794   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:59.553857   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:59.562739   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:59.571819   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:59.680938   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.564659   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:58.034948   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:58.534748   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.034961   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.535634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.035311   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.534756   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.035266   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.535256   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.035489   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.534701   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.963075   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.462112   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.224544   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:02.225623   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.226711   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.775338   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.844402   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.957534   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:12:00.957630   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.458375   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.958215   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.975834   73496 api_server.go:72] duration metric: took 1.018298528s to wait for apiserver process to appear ...
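
The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines are a fixed-interval poll (roughly every 500 ms, judging from the timestamps) until the apiserver process exists; here it appeared after about a second. A minimal local sketch of that loop (illustrative; the real code runs pgrep on the guest over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the timeout expires,
// mirroring the 500 ms retry loop visible in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the full command line.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
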
	I1105 19:12:01.975862   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:12:01.975884   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.774116   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.774149   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.774164   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.825378   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.825427   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.976663   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.984209   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:04.984244   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.476825   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.484608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.484644   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.975985   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.981608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.981639   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:06.476014   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:06.480296   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:12:06.487584   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:12:06.487613   73496 api_server.go:131] duration metric: took 4.511744097s to wait for apiserver health ...
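
The healthz progression above is typical of an apiserver coming up: first 403 (the anonymous probe is rejected, likely because the RBAC bootstrap roles that permit unauthenticated access to /healthz do not exist yet), then 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 "ok". A hedged sketch of such a poll loop (TLS verification skipped because the probe is anonymous; the endpoint is the one in the log above):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// tolerating the 403 and 500 responses seen while bootstrap hooks finish.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.101:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
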
	I1105 19:12:06.487623   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:12:06.487632   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:12:06.489302   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:12:03.034795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:03.534764   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.034833   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.534795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.034815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.534885   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.535327   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.035253   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.535011   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.961693   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.962003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:07.461125   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.724362   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:09.224191   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.490496   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:12:06.500809   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
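
The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI chosen by the earlier "recommending bridge" decision. The log does not show its contents; the sketch below writes a typical bridge+portmap conflist for the 10.244.0.0/16 pod CIDR used here (field values are illustrative, not necessarily what minikube generates):

package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI config list; the pod subnet matches the
// "Using pod CIDR: 10.244.0.0/16" line earlier in the log.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
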
	I1105 19:12:06.529242   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:12:06.542769   73496 system_pods.go:59] 8 kube-system pods found
	I1105 19:12:06.542806   73496 system_pods.go:61] "coredns-7c65d6cfc9-9vvhj" [fde1a6e7-6807-440c-a38d-4f39ede6c11e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:12:06.542818   73496 system_pods.go:61] "etcd-no-preload-459223" [398e3fc3-6902-4cbb-bc50-a72bab461839] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:12:06.542828   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [33a306b0-a41d-4ca3-9d01-69faa7825fe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:12:06.542837   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [865ae24c-d991-4650-9e17-7242f84403e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:12:06.542844   73496 system_pods.go:61] "kube-proxy-6h584" [dd35774f-a245-42af-8fe9-bd6933ad0e30] Running
	I1105 19:12:06.542852   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [27d3685e-d548-49b6-a24d-02b1f8656c66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:12:06.542859   73496 system_pods.go:61] "metrics-server-6867b74b74-5sp2j" [7ddaa66e-b4ba-4241-8dba-5fc6ab66d777] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:12:06.542864   73496 system_pods.go:61] "storage-provisioner" [49786ba3-e9fc-45ad-9418-fd3a0a7b652c] Running
	I1105 19:12:06.542873   73496 system_pods.go:74] duration metric: took 13.603868ms to wait for pod list to return data ...
	I1105 19:12:06.542883   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:12:06.549398   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:12:06.549425   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:12:06.549435   73496 node_conditions.go:105] duration metric: took 6.546615ms to run NodePressure ...
	I1105 19:12:06.549452   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:06.812829   73496 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818052   73496 kubeadm.go:739] kubelet initialised
	I1105 19:12:06.818082   73496 kubeadm.go:740] duration metric: took 5.227942ms waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818093   73496 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:12:06.823883   73496 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.830129   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830164   73496 pod_ready.go:82] duration metric: took 6.253499ms for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.830176   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830187   73496 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.834901   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834942   73496 pod_ready.go:82] duration metric: took 4.743456ms for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.834954   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834988   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.841446   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841474   73496 pod_ready.go:82] duration metric: took 6.472942ms for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.841485   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841494   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.933972   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.933998   73496 pod_ready.go:82] duration metric: took 92.493084ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.934006   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.934012   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333443   73496 pod_ready.go:93] pod "kube-proxy-6h584" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:07.333473   73496 pod_ready.go:82] duration metric: took 399.45278ms for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333486   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:09.339907   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
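
The pod_ready.go lines above poll each system-critical pod for the Ready condition, and skip pods whose node is itself not Ready. A small client-go sketch of the same per-pod check (illustrative; the namespace and pod name come from the log, the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-459223", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
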
	I1105 19:12:08.035104   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:08.534784   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.035198   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.535319   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.035258   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.534634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.035604   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.535077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.035096   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.961614   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.962113   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.724418   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.724954   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.839467   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.839725   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.035100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:13.534793   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.035120   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.535318   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.035062   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.535127   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.034840   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.534830   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.035105   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.534928   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.961398   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.224300   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.729666   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.339542   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:17.840399   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:17.840424   73496 pod_ready.go:82] duration metric: took 10.506929493s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:17.840433   73496 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:19.846676   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.035126   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:18.535446   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.035154   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.535413   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.035580   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.534802   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.035030   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.535250   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.034785   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.534700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.460480   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.461609   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.223496   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.224908   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.847279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:24.347279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.034721   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.534672   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.035358   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.534813   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.535342   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.034934   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.534766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.035389   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.534831   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.961556   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.460682   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:25.723807   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:27.724515   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.346351   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:28.035226   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:28.535577   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.034984   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.535633   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.035509   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.534907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.535421   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.034719   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.534952   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:32.535067   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:32.575052   74485 cri.go:89] found id: ""
	I1105 19:12:32.575085   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.575096   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:32.575104   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:32.575164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:32.609969   74485 cri.go:89] found id: ""
	I1105 19:12:32.610003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.610011   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:32.610017   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:32.610065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:32.642343   74485 cri.go:89] found id: ""
	I1105 19:12:32.642369   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.642376   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:32.642381   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:32.642426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:28.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:30.960340   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.725101   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.224788   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:31.346559   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:33.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.680144   74485 cri.go:89] found id: ""
	I1105 19:12:32.680177   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.680188   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:32.680196   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:32.680270   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:32.715216   74485 cri.go:89] found id: ""
	I1105 19:12:32.715248   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.715259   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:32.715267   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:32.715321   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:32.751742   74485 cri.go:89] found id: ""
	I1105 19:12:32.751771   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.751795   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:32.751803   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:32.751865   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:32.786944   74485 cri.go:89] found id: ""
	I1105 19:12:32.787003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.787015   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:32.787023   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:32.787080   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:32.820523   74485 cri.go:89] found id: ""
	I1105 19:12:32.820550   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.820557   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:32.820565   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:32.820575   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:32.873960   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:32.874000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:32.889268   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:32.889296   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:33.011825   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:33.011846   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:33.011862   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:33.082785   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:33.082827   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:35.630678   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:35.644410   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:35.644492   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:35.679567   74485 cri.go:89] found id: ""
	I1105 19:12:35.679598   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.679607   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:35.679613   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:35.679666   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:35.713685   74485 cri.go:89] found id: ""
	I1105 19:12:35.713713   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.713721   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:35.713726   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:35.713789   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:35.749496   74485 cri.go:89] found id: ""
	I1105 19:12:35.749525   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.749536   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:35.749543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:35.749611   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:35.784228   74485 cri.go:89] found id: ""
	I1105 19:12:35.784254   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.784263   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:35.784269   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:35.784317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:35.818620   74485 cri.go:89] found id: ""
	I1105 19:12:35.818680   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.818696   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:35.818703   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:35.818769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:35.852525   74485 cri.go:89] found id: ""
	I1105 19:12:35.852554   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.852566   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:35.852574   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:35.852648   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:35.887906   74485 cri.go:89] found id: ""
	I1105 19:12:35.887931   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.887939   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:35.887944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:35.887994   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:35.920566   74485 cri.go:89] found id: ""
	I1105 19:12:35.920594   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.920602   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:35.920612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:35.920627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:35.972706   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:35.972742   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:35.986114   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:35.986141   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:36.067016   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:36.067044   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:36.067060   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:36.158947   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:36.159003   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:32.962679   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.461449   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:37.462001   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:34.724028   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:36.724174   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.728373   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.848563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.347478   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:40.347899   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.700738   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:38.713280   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:38.713351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:38.747293   74485 cri.go:89] found id: ""
	I1105 19:12:38.747335   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.747347   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:38.747355   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:38.747414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:38.781607   74485 cri.go:89] found id: ""
	I1105 19:12:38.781635   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.781643   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:38.781648   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:38.781703   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:38.815303   74485 cri.go:89] found id: ""
	I1105 19:12:38.815333   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.815342   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:38.815348   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:38.815397   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:38.850128   74485 cri.go:89] found id: ""
	I1105 19:12:38.850156   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.850166   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:38.850174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:38.850233   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:38.882470   74485 cri.go:89] found id: ""
	I1105 19:12:38.882493   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.882500   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:38.882506   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:38.882563   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:38.914669   74485 cri.go:89] found id: ""
	I1105 19:12:38.914698   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.914706   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:38.914713   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:38.914762   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:38.946521   74485 cri.go:89] found id: ""
	I1105 19:12:38.946548   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.946556   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:38.946561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:38.946613   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:38.979628   74485 cri.go:89] found id: ""
	I1105 19:12:38.979655   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.979663   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:38.979672   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:38.979682   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:39.056066   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:39.056102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.092303   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:39.092333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:39.143754   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:39.143790   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:39.156553   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:39.156587   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:39.220882   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:41.721766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:41.734823   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:41.734893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:41.768636   74485 cri.go:89] found id: ""
	I1105 19:12:41.768668   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.768685   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:41.768693   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:41.768750   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:41.809506   74485 cri.go:89] found id: ""
	I1105 19:12:41.809533   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.809541   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:41.809546   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:41.809606   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:41.849953   74485 cri.go:89] found id: ""
	I1105 19:12:41.849977   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.849985   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:41.849991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:41.850037   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:41.893042   74485 cri.go:89] found id: ""
	I1105 19:12:41.893072   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.893084   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:41.893091   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:41.893152   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:41.936259   74485 cri.go:89] found id: ""
	I1105 19:12:41.936282   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.936292   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:41.936298   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:41.936347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:41.970322   74485 cri.go:89] found id: ""
	I1105 19:12:41.970344   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.970353   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:41.970360   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:41.970427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:42.004351   74485 cri.go:89] found id: ""
	I1105 19:12:42.004375   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.004383   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:42.004388   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:42.004443   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:42.035136   74485 cri.go:89] found id: ""
	I1105 19:12:42.035163   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.035174   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:42.035185   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:42.035201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:42.086760   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:42.086801   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:42.100795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:42.100829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:42.167480   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:42.167509   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:42.167529   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:42.248625   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:42.248664   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.961606   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.461423   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:41.224956   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:43.724906   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.846509   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.847235   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.785100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:44.798182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:44.798248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:44.834080   74485 cri.go:89] found id: ""
	I1105 19:12:44.834107   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.834115   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:44.834120   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:44.834179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:44.870572   74485 cri.go:89] found id: ""
	I1105 19:12:44.870602   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.870613   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:44.870620   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:44.870691   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:44.908960   74485 cri.go:89] found id: ""
	I1105 19:12:44.908991   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.909002   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:44.909010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:44.909075   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:44.945310   74485 cri.go:89] found id: ""
	I1105 19:12:44.945342   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.945350   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:44.945355   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:44.945409   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:44.982893   74485 cri.go:89] found id: ""
	I1105 19:12:44.982935   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.982946   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:44.982953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:44.983030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:45.015529   74485 cri.go:89] found id: ""
	I1105 19:12:45.015559   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.015571   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:45.015578   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:45.015640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:45.047252   74485 cri.go:89] found id: ""
	I1105 19:12:45.047284   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.047295   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:45.047302   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:45.047364   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:45.082963   74485 cri.go:89] found id: ""
	I1105 19:12:45.083009   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.083018   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:45.083026   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:45.083039   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:45.131844   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:45.131881   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:45.145500   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:45.145530   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:45.214668   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:45.214709   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:45.214725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:45.291203   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:45.291243   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:44.963672   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.461610   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:46.223849   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:48.225352   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.346007   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:49.346691   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.831908   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:47.844873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:47.844957   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:47.881587   74485 cri.go:89] found id: ""
	I1105 19:12:47.881617   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.881628   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:47.881644   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:47.881714   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:47.918381   74485 cri.go:89] found id: ""
	I1105 19:12:47.918411   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.918423   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:47.918430   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:47.918491   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:47.950835   74485 cri.go:89] found id: ""
	I1105 19:12:47.950864   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.950880   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:47.950889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:47.950947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:47.985234   74485 cri.go:89] found id: ""
	I1105 19:12:47.985261   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.985272   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:47.985279   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:47.985338   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:48.019406   74485 cri.go:89] found id: ""
	I1105 19:12:48.019437   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.019448   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:48.019455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:48.019532   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:48.053126   74485 cri.go:89] found id: ""
	I1105 19:12:48.053160   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.053172   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:48.053180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:48.053241   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:48.086847   74485 cri.go:89] found id: ""
	I1105 19:12:48.086872   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.086879   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:48.086885   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:48.086944   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:48.122366   74485 cri.go:89] found id: ""
	I1105 19:12:48.122388   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.122396   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:48.122404   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:48.122421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:48.171579   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:48.171622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:48.185207   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:48.185234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:48.249553   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:48.249575   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:48.249586   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:48.323391   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:48.323427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:50.861939   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:50.874943   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:50.875041   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:50.911498   74485 cri.go:89] found id: ""
	I1105 19:12:50.911522   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.911530   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:50.911536   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:50.911591   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:50.946936   74485 cri.go:89] found id: ""
	I1105 19:12:50.946962   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.946988   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:50.947034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:50.947098   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:50.983220   74485 cri.go:89] found id: ""
	I1105 19:12:50.983246   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.983258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:50.983265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:50.983314   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:51.017052   74485 cri.go:89] found id: ""
	I1105 19:12:51.017078   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.017086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:51.017092   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:51.017141   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:51.051417   74485 cri.go:89] found id: ""
	I1105 19:12:51.051448   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.051459   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:51.051466   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:51.051529   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:51.085129   74485 cri.go:89] found id: ""
	I1105 19:12:51.085164   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.085177   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:51.085182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:51.085232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:51.122065   74485 cri.go:89] found id: ""
	I1105 19:12:51.122100   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.122113   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:51.122120   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:51.122178   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:51.154909   74485 cri.go:89] found id: ""
	I1105 19:12:51.154938   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.154946   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:51.154954   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:51.154966   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:51.167768   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:51.167798   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:51.231849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:51.231873   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:51.231897   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:51.314426   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:51.314487   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:51.356654   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:51.356685   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:49.961294   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.461707   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:50.723534   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.723821   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:51.347677   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.847328   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.911774   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:53.924884   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:53.924968   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:53.957690   74485 cri.go:89] found id: ""
	I1105 19:12:53.957719   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.957729   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:53.957737   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:53.957802   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:53.990717   74485 cri.go:89] found id: ""
	I1105 19:12:53.990744   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.990751   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:53.990757   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:53.990803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:54.023229   74485 cri.go:89] found id: ""
	I1105 19:12:54.023251   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.023258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:54.023263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:54.023320   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:54.056950   74485 cri.go:89] found id: ""
	I1105 19:12:54.056977   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.056987   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:54.056995   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:54.057056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:54.091729   74485 cri.go:89] found id: ""
	I1105 19:12:54.091756   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.091768   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:54.091776   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:54.091828   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:54.123964   74485 cri.go:89] found id: ""
	I1105 19:12:54.123991   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.124001   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:54.124009   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:54.124070   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:54.155164   74485 cri.go:89] found id: ""
	I1105 19:12:54.155195   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.155204   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:54.155209   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:54.155268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:54.188161   74485 cri.go:89] found id: ""
	I1105 19:12:54.188191   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.188202   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:54.188213   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:54.188226   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:54.240906   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:54.240941   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:54.254061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:54.254093   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:54.321973   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:54.322007   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:54.322026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:54.405106   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:54.405147   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:56.941801   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:56.954658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:56.954741   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:56.990372   74485 cri.go:89] found id: ""
	I1105 19:12:56.990400   74485 logs.go:282] 0 containers: []
	W1105 19:12:56.990411   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:56.990419   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:56.990479   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:57.023047   74485 cri.go:89] found id: ""
	I1105 19:12:57.023082   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.023093   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:57.023102   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:57.023163   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:57.054991   74485 cri.go:89] found id: ""
	I1105 19:12:57.055021   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.055030   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:57.055036   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:57.055094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:57.086182   74485 cri.go:89] found id: ""
	I1105 19:12:57.086214   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.086225   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:57.086233   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:57.086295   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:57.120322   74485 cri.go:89] found id: ""
	I1105 19:12:57.120350   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.120361   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:57.120368   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:57.120431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:57.153751   74485 cri.go:89] found id: ""
	I1105 19:12:57.153781   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.153790   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:57.153796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:57.153845   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:57.189208   74485 cri.go:89] found id: ""
	I1105 19:12:57.189234   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.189244   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:57.189251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:57.189317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:57.223259   74485 cri.go:89] found id: ""
	I1105 19:12:57.223292   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.223301   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:57.223308   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:57.223320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:57.273063   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:57.273098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:57.287759   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:57.287783   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:57.353387   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:57.353409   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:57.353421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:57.426374   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:57.426411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:54.462191   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.960479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:54.723926   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.724988   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.224704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:55.847609   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:58.347062   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.348243   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.965907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:59.979081   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:59.979149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:00.010955   74485 cri.go:89] found id: ""
	I1105 19:13:00.011001   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.011012   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:00.011021   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:00.011081   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:00.044800   74485 cri.go:89] found id: ""
	I1105 19:13:00.044825   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.044832   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:00.044838   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:00.044894   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:00.082999   74485 cri.go:89] found id: ""
	I1105 19:13:00.083040   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.083050   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:00.083059   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:00.083125   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:00.120792   74485 cri.go:89] found id: ""
	I1105 19:13:00.120826   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.120835   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:00.120840   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:00.120903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:00.153156   74485 cri.go:89] found id: ""
	I1105 19:13:00.153188   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.153200   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:00.153207   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:00.153273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:00.189039   74485 cri.go:89] found id: ""
	I1105 19:13:00.189066   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.189073   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:00.189079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:00.189143   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:00.220904   74485 cri.go:89] found id: ""
	I1105 19:13:00.220932   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.220942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:00.220950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:00.221012   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:00.255414   74485 cri.go:89] found id: ""
	I1105 19:13:00.255443   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.255454   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:00.255464   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:00.255480   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:00.329027   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:00.329050   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:00.329061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:00.405813   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:00.405847   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:00.443302   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:00.443332   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:00.498413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:00.498452   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:58.960870   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.962098   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:01.723865   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.724945   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:02.846369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:04.846751   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.011897   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:03.025351   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:03.025419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:03.058881   74485 cri.go:89] found id: ""
	I1105 19:13:03.058910   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.058920   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:03.058928   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:03.059018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:03.093549   74485 cri.go:89] found id: ""
	I1105 19:13:03.093580   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.093592   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:03.093600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:03.093660   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:03.132355   74485 cri.go:89] found id: ""
	I1105 19:13:03.132384   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.132395   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:03.132402   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:03.132463   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:03.164832   74485 cri.go:89] found id: ""
	I1105 19:13:03.164864   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.164875   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:03.164888   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:03.164947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:03.203187   74485 cri.go:89] found id: ""
	I1105 19:13:03.203213   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.203221   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:03.203226   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:03.203282   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:03.238867   74485 cri.go:89] found id: ""
	I1105 19:13:03.238899   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.238921   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:03.238928   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:03.239010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:03.276139   74485 cri.go:89] found id: ""
	I1105 19:13:03.276174   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.276187   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:03.276195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:03.276251   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:03.312588   74485 cri.go:89] found id: ""
	I1105 19:13:03.312613   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.312631   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:03.312639   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:03.312650   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:03.379754   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:03.379782   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:03.379797   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:03.455719   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:03.455754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.493428   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:03.493458   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:03.545447   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:03.545481   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
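	(The block above is one complete diagnostic round: minikube probes for each control-plane container by name with crictl, finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal shell sketch of that same round, reusing only the commands already shown in the ssh_runner lines above and introducing nothing new, would be:
	#!/usr/bin/env bash
	# Re-run the same per-component probe the ssh_runner lines above issue on the guest.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  [ -z "${ids}" ] && echo "no container found matching \"${name}\""
	done
	# Fallback log gathering, exactly as issued above:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	Each empty crictl listing is what produces the "No container was found matching ..." warnings that repeat through the rest of this log.)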
	I1105 19:13:06.060213   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:06.074756   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:06.074831   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:06.111392   74485 cri.go:89] found id: ""
	I1105 19:13:06.111421   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.111429   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:06.111435   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:06.111493   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:06.147535   74485 cri.go:89] found id: ""
	I1105 19:13:06.147568   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.147579   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:06.147585   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:06.147646   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:06.183176   74485 cri.go:89] found id: ""
	I1105 19:13:06.183198   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.183205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:06.183211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:06.183262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:06.213957   74485 cri.go:89] found id: ""
	I1105 19:13:06.213983   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.213992   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:06.213997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:06.214060   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:06.251199   74485 cri.go:89] found id: ""
	I1105 19:13:06.251227   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.251234   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:06.251240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:06.251297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:06.288128   74485 cri.go:89] found id: ""
	I1105 19:13:06.288157   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.288167   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:06.288174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:06.288236   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:06.325265   74485 cri.go:89] found id: ""
	I1105 19:13:06.325296   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.325306   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:06.325314   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:06.325375   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:06.359649   74485 cri.go:89] found id: ""
	I1105 19:13:06.359689   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.359700   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:06.359710   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:06.359725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:06.408423   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:06.408456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.421776   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:06.421804   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:06.487464   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:06.487493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:06.487507   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:06.565789   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:06.565829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.461192   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.725002   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:08.225146   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:07.346498   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.347264   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.104578   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:09.117930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:09.118022   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:09.156055   74485 cri.go:89] found id: ""
	I1105 19:13:09.156083   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.156093   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:09.156101   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:09.156161   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:09.190470   74485 cri.go:89] found id: ""
	I1105 19:13:09.190499   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.190509   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:09.190516   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:09.190576   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:09.222568   74485 cri.go:89] found id: ""
	I1105 19:13:09.222595   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.222606   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:09.222612   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:09.222677   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:09.260251   74485 cri.go:89] found id: ""
	I1105 19:13:09.260282   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.260292   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:09.260300   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:09.260362   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:09.296006   74485 cri.go:89] found id: ""
	I1105 19:13:09.296036   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.296047   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:09.296054   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:09.296118   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:09.331213   74485 cri.go:89] found id: ""
	I1105 19:13:09.331246   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.331257   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:09.331265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:09.331333   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:09.364286   74485 cri.go:89] found id: ""
	I1105 19:13:09.364316   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.364327   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:09.364335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:09.364445   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:09.398060   74485 cri.go:89] found id: ""
	I1105 19:13:09.398084   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.398092   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:09.398101   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:09.398113   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:09.447373   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:09.447409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:09.461483   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:09.461514   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:09.528213   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:09.528236   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:09.528248   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:09.607397   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:09.607430   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.146158   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:12.159183   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:12.159262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:12.193917   74485 cri.go:89] found id: ""
	I1105 19:13:12.193952   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.193963   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:12.193971   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:12.194036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:12.226558   74485 cri.go:89] found id: ""
	I1105 19:13:12.226585   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.226594   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:12.226600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:12.226662   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:12.258437   74485 cri.go:89] found id: ""
	I1105 19:13:12.258469   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.258481   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:12.258488   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:12.258557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:12.291308   74485 cri.go:89] found id: ""
	I1105 19:13:12.291341   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.291353   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:12.291361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:12.291431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:12.325768   74485 cri.go:89] found id: ""
	I1105 19:13:12.325801   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.325812   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:12.325819   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:12.325884   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:12.361077   74485 cri.go:89] found id: ""
	I1105 19:13:12.361100   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.361108   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:12.361118   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:12.361179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:12.394769   74485 cri.go:89] found id: ""
	I1105 19:13:12.394791   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.394800   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:12.394806   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:12.394864   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:12.430138   74485 cri.go:89] found id: ""
	I1105 19:13:12.430167   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.430177   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:12.430189   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:12.430200   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.472596   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:12.472637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:12.523107   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:12.523143   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:12.535797   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:12.535824   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:12.604088   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:12.604108   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:12.604123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:08.460647   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.462830   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.225468   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.225693   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:11.849320   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.347487   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:15.185725   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:15.200158   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:15.200238   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:15.238309   74485 cri.go:89] found id: ""
	I1105 19:13:15.238334   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.238342   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:15.238349   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:15.238404   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:15.272897   74485 cri.go:89] found id: ""
	I1105 19:13:15.272927   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.272938   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:15.272945   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:15.273013   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:15.307700   74485 cri.go:89] found id: ""
	I1105 19:13:15.307726   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.307737   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:15.307744   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:15.307810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:15.340156   74485 cri.go:89] found id: ""
	I1105 19:13:15.340182   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.340196   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:15.340202   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:15.340252   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:15.375930   74485 cri.go:89] found id: ""
	I1105 19:13:15.375963   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.375971   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:15.375976   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:15.376031   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:15.409876   74485 cri.go:89] found id: ""
	I1105 19:13:15.409905   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.409915   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:15.409922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:15.409984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:15.442781   74485 cri.go:89] found id: ""
	I1105 19:13:15.442808   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.442819   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:15.442825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:15.442896   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:15.480578   74485 cri.go:89] found id: ""
	I1105 19:13:15.480606   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.480614   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:15.480623   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:15.480634   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:15.530910   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:15.530952   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:15.544351   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:15.544382   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:15.618345   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:15.618373   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:15.618396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:15.704408   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:15.704451   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:14.961408   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.961486   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.724130   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.724204   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.724704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.347818   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.846423   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.244882   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:18.258667   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:18.258758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:18.292140   74485 cri.go:89] found id: ""
	I1105 19:13:18.292163   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.292171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:18.292178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:18.292235   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:18.324954   74485 cri.go:89] found id: ""
	I1105 19:13:18.324979   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.324985   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:18.324991   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:18.325048   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:18.361943   74485 cri.go:89] found id: ""
	I1105 19:13:18.361972   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.361983   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:18.361991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:18.362062   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:18.396012   74485 cri.go:89] found id: ""
	I1105 19:13:18.396036   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.396044   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:18.396050   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:18.396097   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:18.428852   74485 cri.go:89] found id: ""
	I1105 19:13:18.428875   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.428883   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:18.428889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:18.428946   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:18.464364   74485 cri.go:89] found id: ""
	I1105 19:13:18.464390   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.464397   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:18.464404   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:18.464464   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:18.496478   74485 cri.go:89] found id: ""
	I1105 19:13:18.496505   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.496514   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:18.496519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:18.496577   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:18.530313   74485 cri.go:89] found id: ""
	I1105 19:13:18.530339   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.530348   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:18.530356   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:18.530368   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:18.582593   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:18.582627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:18.596580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:18.596616   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:18.663920   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:18.663959   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:18.663974   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:18.740706   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:18.740746   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
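	(Every "describe nodes" attempt above fails with "connection refused" on localhost:8443, which is consistent with the empty crictl listings: no kube-apiserver container exists, so nothing is serving that port. A quick manual check on the guest, illustrative only (ss and the grep pattern are assumptions; the kubectl path and kubeconfig are copied from the log), would be:
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig
	As long as nothing listens on 8443, every retry below ends in the same stderr block.)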
	I1105 19:13:21.281614   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:21.295841   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:21.295919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:21.330832   74485 cri.go:89] found id: ""
	I1105 19:13:21.330856   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.330864   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:21.330869   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:21.330922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:21.365228   74485 cri.go:89] found id: ""
	I1105 19:13:21.365257   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.365265   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:21.365269   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:21.365317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:21.418675   74485 cri.go:89] found id: ""
	I1105 19:13:21.418702   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.418719   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:21.418727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:21.418793   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:21.453966   74485 cri.go:89] found id: ""
	I1105 19:13:21.453994   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.454003   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:21.454008   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:21.454058   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:21.492030   74485 cri.go:89] found id: ""
	I1105 19:13:21.492056   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.492067   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:21.492078   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:21.492128   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:21.529146   74485 cri.go:89] found id: ""
	I1105 19:13:21.529174   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.529183   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:21.529190   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:21.529250   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:21.566491   74485 cri.go:89] found id: ""
	I1105 19:13:21.566519   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.566528   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:21.566533   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:21.566595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:21.605720   74485 cri.go:89] found id: ""
	I1105 19:13:21.605745   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.605754   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:21.605762   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:21.605772   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:21.682385   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:21.682408   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:21.682420   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:21.764519   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:21.764557   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.805090   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:21.805117   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:21.857560   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:21.857593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:19.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.961995   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.224702   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.226864   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:20.850915   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.346819   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.347230   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:24.371420   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:24.384566   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:24.384634   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:24.416283   74485 cri.go:89] found id: ""
	I1105 19:13:24.416308   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.416319   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:24.416327   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:24.416388   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:24.452875   74485 cri.go:89] found id: ""
	I1105 19:13:24.452899   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.452907   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:24.452913   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:24.452964   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:24.489946   74485 cri.go:89] found id: ""
	I1105 19:13:24.489974   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.489992   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:24.490000   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:24.490056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:24.527348   74485 cri.go:89] found id: ""
	I1105 19:13:24.527377   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.527388   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:24.527395   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:24.527451   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:24.558992   74485 cri.go:89] found id: ""
	I1105 19:13:24.559024   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.559035   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:24.559047   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:24.559105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:24.591405   74485 cri.go:89] found id: ""
	I1105 19:13:24.591437   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.591448   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:24.591455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:24.591516   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.625002   74485 cri.go:89] found id: ""
	I1105 19:13:24.625031   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.625040   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:24.625048   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:24.625114   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:24.657867   74485 cri.go:89] found id: ""
	I1105 19:13:24.657896   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.657907   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:24.657918   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:24.657931   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:24.708444   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:24.708482   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:24.721771   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:24.721814   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:24.793946   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:24.793980   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:24.793996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:24.875130   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:24.875167   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:27.412872   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:27.426996   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:27.427072   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:27.462434   74485 cri.go:89] found id: ""
	I1105 19:13:27.462458   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.462468   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:27.462475   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:27.462536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:27.496916   74485 cri.go:89] found id: ""
	I1105 19:13:27.496951   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.496962   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:27.496969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:27.497035   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:27.528826   74485 cri.go:89] found id: ""
	I1105 19:13:27.528853   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.528861   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:27.528867   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:27.528919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:27.563164   74485 cri.go:89] found id: ""
	I1105 19:13:27.563193   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.563204   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:27.563210   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:27.563284   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:27.600136   74485 cri.go:89] found id: ""
	I1105 19:13:27.600164   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.600174   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:27.600180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:27.600247   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:27.634326   74485 cri.go:89] found id: ""
	I1105 19:13:27.634358   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.634368   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:27.634377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:27.634452   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.462295   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:26.961567   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.723935   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.725498   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.847362   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.349542   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.668154   74485 cri.go:89] found id: ""
	I1105 19:13:27.668185   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.668196   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:27.668203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:27.668263   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:27.706016   74485 cri.go:89] found id: ""
	I1105 19:13:27.706043   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.706051   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:27.706059   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:27.706071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:27.755890   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:27.755929   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:27.773038   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:27.773063   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:27.863392   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:27.863414   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:27.863429   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:27.949149   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:27.949185   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.489333   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:30.502794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:30.502878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:30.536263   74485 cri.go:89] found id: ""
	I1105 19:13:30.536289   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.536297   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:30.536302   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:30.536347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:30.570418   74485 cri.go:89] found id: ""
	I1105 19:13:30.570445   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.570455   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:30.570462   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:30.570523   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:30.601972   74485 cri.go:89] found id: ""
	I1105 19:13:30.602003   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.602013   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:30.602020   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:30.602086   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:30.634151   74485 cri.go:89] found id: ""
	I1105 19:13:30.634183   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.634195   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:30.634203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:30.634265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:30.666384   74485 cri.go:89] found id: ""
	I1105 19:13:30.666415   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.666425   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:30.666433   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:30.666498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:30.699587   74485 cri.go:89] found id: ""
	I1105 19:13:30.699619   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.699631   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:30.699639   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:30.699699   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:30.731917   74485 cri.go:89] found id: ""
	I1105 19:13:30.731972   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.731983   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:30.731990   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:30.732051   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:30.768807   74485 cri.go:89] found id: ""
	I1105 19:13:30.768832   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.768840   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:30.768849   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:30.768860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:30.848594   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:30.848626   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.889031   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:30.889067   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:30.940550   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:30.940588   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:30.953810   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:30.953845   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:31.023633   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:29.461686   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:31.961484   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.225024   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.723965   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.847298   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:35.347135   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
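	(Interleaved with the 74485 retries, three other clusters, PIDs 73732, 74141, and 73496, keep polling pod_ready for their metrics-server pods, which remain Ready=False throughout this window. An equivalent manual check of that condition, given as a sketch that assumes direct kubectl access to one of those clusters (the pod name is copied from the log), would be:
	kubectl -n kube-system get pod metrics-server-6867b74b74-vw2sm \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	A result of "False" corresponds to the pod_ready.go:103 lines shown above.)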
	I1105 19:13:33.524150   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:33.539025   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:33.539112   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:33.584756   74485 cri.go:89] found id: ""
	I1105 19:13:33.584786   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.584799   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:33.584807   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:33.584869   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:33.624785   74485 cri.go:89] found id: ""
	I1105 19:13:33.624816   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.624829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:33.624836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:33.625025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:33.668750   74485 cri.go:89] found id: ""
	I1105 19:13:33.668783   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.668794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:33.668804   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:33.668867   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:33.701675   74485 cri.go:89] found id: ""
	I1105 19:13:33.701707   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.701735   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:33.701743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:33.701817   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:33.737368   74485 cri.go:89] found id: ""
	I1105 19:13:33.737393   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.737401   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:33.737407   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:33.737458   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:33.770589   74485 cri.go:89] found id: ""
	I1105 19:13:33.770620   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.770630   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:33.770638   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:33.770704   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:33.802635   74485 cri.go:89] found id: ""
	I1105 19:13:33.802668   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.802680   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:33.802687   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:33.802751   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:33.839274   74485 cri.go:89] found id: ""
	I1105 19:13:33.839301   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.839309   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:33.839317   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:33.839328   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:33.881049   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:33.881090   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:33.932704   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:33.932743   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:33.945979   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:33.946007   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:34.017355   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:34.017375   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:34.017390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:36.596284   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:36.608240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:36.608306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:36.641846   74485 cri.go:89] found id: ""
	I1105 19:13:36.641878   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.641887   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:36.641901   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:36.641966   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:36.676553   74485 cri.go:89] found id: ""
	I1105 19:13:36.676584   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.676595   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:36.676602   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:36.676669   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:36.711931   74485 cri.go:89] found id: ""
	I1105 19:13:36.711961   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.711972   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:36.711980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:36.712042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:36.748510   74485 cri.go:89] found id: ""
	I1105 19:13:36.748534   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.748542   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:36.748547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:36.748596   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:36.781869   74485 cri.go:89] found id: ""
	I1105 19:13:36.781899   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.781912   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:36.781922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:36.781983   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:36.816574   74485 cri.go:89] found id: ""
	I1105 19:13:36.816597   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.816605   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:36.816610   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:36.816658   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:36.852894   74485 cri.go:89] found id: ""
	I1105 19:13:36.852921   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.852928   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:36.852934   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:36.852996   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:36.891732   74485 cri.go:89] found id: ""
	I1105 19:13:36.891764   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.891783   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:36.891795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:36.891810   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:36.964948   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:36.964972   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:36.964987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:37.043727   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:37.043765   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:37.084306   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:37.084333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:37.133238   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:37.133274   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:34.461773   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:36.960440   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:34.724805   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.224830   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.227912   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.347383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.347770   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.647492   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:39.659944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:39.660025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:39.695382   74485 cri.go:89] found id: ""
	I1105 19:13:39.695405   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.695415   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:39.695422   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:39.695480   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:39.731807   74485 cri.go:89] found id: ""
	I1105 19:13:39.731833   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.731841   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:39.731846   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:39.731895   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:39.766913   74485 cri.go:89] found id: ""
	I1105 19:13:39.766945   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.766955   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:39.766963   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:39.767049   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:39.800265   74485 cri.go:89] found id: ""
	I1105 19:13:39.800288   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.800296   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:39.800301   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:39.800346   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:39.832753   74485 cri.go:89] found id: ""
	I1105 19:13:39.832781   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.832789   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:39.832794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:39.832843   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:39.865950   74485 cri.go:89] found id: ""
	I1105 19:13:39.865980   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.865990   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:39.865997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:39.866046   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:39.902918   74485 cri.go:89] found id: ""
	I1105 19:13:39.902948   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.902957   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:39.902962   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:39.903039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:39.935086   74485 cri.go:89] found id: ""
	I1105 19:13:39.935117   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.935129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:39.935139   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:39.935152   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:39.997935   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:39.997961   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:39.997976   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:40.076794   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:40.076852   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:40.114178   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:40.114209   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:40.163512   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:40.163550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:38.961003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:40.962241   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.724237   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:43.725317   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.847149   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:44.346097   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:42.676843   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:42.689855   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:42.689930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:42.724108   74485 cri.go:89] found id: ""
	I1105 19:13:42.724139   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.724148   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:42.724156   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:42.724218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:42.760816   74485 cri.go:89] found id: ""
	I1105 19:13:42.760844   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.760854   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:42.760861   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:42.760924   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:42.795111   74485 cri.go:89] found id: ""
	I1105 19:13:42.795134   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.795142   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:42.795147   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:42.795195   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:42.832964   74485 cri.go:89] found id: ""
	I1105 19:13:42.832988   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.832997   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:42.833003   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:42.833065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:42.868817   74485 cri.go:89] found id: ""
	I1105 19:13:42.868848   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.868858   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:42.868865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:42.868933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:42.902015   74485 cri.go:89] found id: ""
	I1105 19:13:42.902044   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.902051   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:42.902056   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:42.902146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:42.934298   74485 cri.go:89] found id: ""
	I1105 19:13:42.934322   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.934330   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:42.934335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:42.934385   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:42.969804   74485 cri.go:89] found id: ""
	I1105 19:13:42.969831   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.969843   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:42.969854   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:42.969873   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:43.019922   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:43.019959   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:43.033594   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:43.033622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:43.108220   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:43.108240   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:43.108251   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:43.191946   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:43.191987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:45.730728   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:45.743344   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:45.743419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:45.777693   74485 cri.go:89] found id: ""
	I1105 19:13:45.777728   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.777739   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:45.777747   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:45.777810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:45.810195   74485 cri.go:89] found id: ""
	I1105 19:13:45.810222   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.810233   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:45.810240   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:45.810308   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:45.851210   74485 cri.go:89] found id: ""
	I1105 19:13:45.851240   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.851247   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:45.851252   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:45.851311   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:45.885501   74485 cri.go:89] found id: ""
	I1105 19:13:45.885531   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.885540   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:45.885546   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:45.885595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:45.921638   74485 cri.go:89] found id: ""
	I1105 19:13:45.921667   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.921676   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:45.921684   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:45.921745   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:45.954341   74485 cri.go:89] found id: ""
	I1105 19:13:45.954373   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.954384   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:45.954394   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:45.954461   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:45.988840   74485 cri.go:89] found id: ""
	I1105 19:13:45.988865   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.988873   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:45.988879   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:45.988949   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:46.025409   74485 cri.go:89] found id: ""
	I1105 19:13:46.025441   74485 logs.go:282] 0 containers: []
	W1105 19:13:46.025458   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:46.025470   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:46.025486   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:46.037763   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:46.037787   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:46.112619   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:46.112663   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:46.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:46.192165   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:46.192199   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:46.233235   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:46.233263   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:42.962569   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:45.461256   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:47.461781   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.225004   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.723774   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.346687   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.787685   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:48.800681   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:48.800749   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:48.835344   74485 cri.go:89] found id: ""
	I1105 19:13:48.835366   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.835374   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:48.835383   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:48.835429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:48.867447   74485 cri.go:89] found id: ""
	I1105 19:13:48.867474   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.867483   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:48.867488   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:48.867536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:48.899135   74485 cri.go:89] found id: ""
	I1105 19:13:48.899160   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.899167   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:48.899172   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:48.899221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:48.932208   74485 cri.go:89] found id: ""
	I1105 19:13:48.932243   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.932255   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:48.932263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:48.932326   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:48.967174   74485 cri.go:89] found id: ""
	I1105 19:13:48.967202   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.967210   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:48.967215   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:48.967267   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:48.998902   74485 cri.go:89] found id: ""
	I1105 19:13:48.998932   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.998942   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:48.998950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:48.999030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:49.030946   74485 cri.go:89] found id: ""
	I1105 19:13:49.030988   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.030999   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:49.031006   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:49.031074   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:49.063489   74485 cri.go:89] found id: ""
	I1105 19:13:49.063517   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.063528   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:49.063540   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:49.063555   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:49.116433   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:49.116477   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:49.131439   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:49.131476   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:49.199770   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:49.199795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:49.199809   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:49.275503   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:49.275543   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:51.816208   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:51.829328   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:51.829399   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:51.863320   74485 cri.go:89] found id: ""
	I1105 19:13:51.863346   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.863354   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:51.863359   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:51.863406   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:51.896589   74485 cri.go:89] found id: ""
	I1105 19:13:51.896618   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.896628   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:51.896635   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:51.896697   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:51.933744   74485 cri.go:89] found id: ""
	I1105 19:13:51.933769   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.933776   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:51.933781   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:51.933829   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:51.970806   74485 cri.go:89] found id: ""
	I1105 19:13:51.970829   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.970836   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:51.970842   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:51.970889   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:52.004087   74485 cri.go:89] found id: ""
	I1105 19:13:52.004116   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.004124   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:52.004129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:52.004186   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:52.041721   74485 cri.go:89] found id: ""
	I1105 19:13:52.041752   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.041763   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:52.041771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:52.041835   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:52.079253   74485 cri.go:89] found id: ""
	I1105 19:13:52.079277   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.079285   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:52.079292   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:52.079351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:52.112604   74485 cri.go:89] found id: ""
	I1105 19:13:52.112642   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.112653   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:52.112664   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:52.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:52.160799   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:52.160841   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:52.174323   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:52.174355   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:52.247358   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:52.247383   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:52.247395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:52.326071   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:52.326108   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:49.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.461239   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.724514   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.724742   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.848418   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:53.346329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.347199   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:54.866454   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:54.879015   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:54.879093   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:54.911729   74485 cri.go:89] found id: ""
	I1105 19:13:54.911765   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.911777   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:54.911785   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:54.911846   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:54.943137   74485 cri.go:89] found id: ""
	I1105 19:13:54.943169   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.943185   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:54.943193   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:54.943253   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:54.977951   74485 cri.go:89] found id: ""
	I1105 19:13:54.977980   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.977991   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:54.977998   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:54.978061   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:55.009453   74485 cri.go:89] found id: ""
	I1105 19:13:55.009478   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.009486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:55.009491   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:55.009537   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:55.040790   74485 cri.go:89] found id: ""
	I1105 19:13:55.040814   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.040821   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:55.040827   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:55.040878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:55.073401   74485 cri.go:89] found id: ""
	I1105 19:13:55.073430   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.073441   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:55.073449   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:55.073508   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:55.105419   74485 cri.go:89] found id: ""
	I1105 19:13:55.105443   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.105451   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:55.105456   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:55.105511   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:55.137363   74485 cri.go:89] found id: ""
	I1105 19:13:55.137395   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.137406   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:55.137416   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:55.137431   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:55.174176   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:55.174201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:55.221658   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:55.221693   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:55.235044   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:55.235070   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:55.308192   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:55.308218   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:55.308234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:54.461424   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:56.961198   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.223920   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.224915   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.847329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:00.347371   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.892462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:57.905472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:57.905543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:57.946044   74485 cri.go:89] found id: ""
	I1105 19:13:57.946071   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.946081   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:57.946089   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:57.946149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:57.980762   74485 cri.go:89] found id: ""
	I1105 19:13:57.980791   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.980803   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:57.980811   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:57.980874   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:58.013351   74485 cri.go:89] found id: ""
	I1105 19:13:58.013374   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.013381   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:58.013386   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:58.013433   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:58.049056   74485 cri.go:89] found id: ""
	I1105 19:13:58.049083   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.049091   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:58.049097   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:58.049147   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:58.081476   74485 cri.go:89] found id: ""
	I1105 19:13:58.081507   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.081517   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:58.081524   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:58.081583   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:58.114526   74485 cri.go:89] found id: ""
	I1105 19:13:58.114554   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.114564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:58.114571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:58.114630   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:58.148219   74485 cri.go:89] found id: ""
	I1105 19:13:58.148243   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.148252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:58.148257   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:58.148312   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:58.183254   74485 cri.go:89] found id: ""
	I1105 19:13:58.183277   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.183285   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:58.183292   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:58.183304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:58.234747   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:58.234785   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:58.248269   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:58.248300   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:58.313290   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:58.313312   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:58.313327   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:58.389847   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:58.389889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:00.927957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:00.941525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:00.941593   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:00.974891   74485 cri.go:89] found id: ""
	I1105 19:14:00.974920   74485 logs.go:282] 0 containers: []
	W1105 19:14:00.974931   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:00.974938   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:00.975018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:01.008224   74485 cri.go:89] found id: ""
	I1105 19:14:01.008250   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.008262   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:01.008270   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:01.008328   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:01.044514   74485 cri.go:89] found id: ""
	I1105 19:14:01.044545   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.044553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:01.044559   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:01.044614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:01.077091   74485 cri.go:89] found id: ""
	I1105 19:14:01.077124   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.077135   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:01.077141   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:01.077197   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:01.109947   74485 cri.go:89] found id: ""
	I1105 19:14:01.109976   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.109986   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:01.109994   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:01.110054   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:01.146162   74485 cri.go:89] found id: ""
	I1105 19:14:01.146193   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.146203   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:01.146211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:01.146275   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:01.180335   74485 cri.go:89] found id: ""
	I1105 19:14:01.180360   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.180370   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:01.180377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:01.180436   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:01.216160   74485 cri.go:89] found id: ""
	I1105 19:14:01.216189   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.216199   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:01.216221   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:01.216236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:01.229426   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:01.229455   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:01.298847   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:01.298874   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:01.298889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:01.375255   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:01.375299   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:01.417946   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:01.418026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:59.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.961362   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:59.724103   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.724976   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.725344   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:02.349032   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:04.847734   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.973713   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:03.987128   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:03.987198   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:04.020050   74485 cri.go:89] found id: ""
	I1105 19:14:04.020081   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.020091   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:04.020098   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:04.020164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:04.053458   74485 cri.go:89] found id: ""
	I1105 19:14:04.053485   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.053492   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:04.053498   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:04.053544   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:04.086417   74485 cri.go:89] found id: ""
	I1105 19:14:04.086442   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.086455   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:04.086461   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:04.086513   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:04.122035   74485 cri.go:89] found id: ""
	I1105 19:14:04.122059   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.122067   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:04.122073   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:04.122120   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:04.158732   74485 cri.go:89] found id: ""
	I1105 19:14:04.158758   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.158765   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:04.158771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:04.158822   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:04.190497   74485 cri.go:89] found id: ""
	I1105 19:14:04.190525   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.190536   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:04.190543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:04.190604   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:04.222040   74485 cri.go:89] found id: ""
	I1105 19:14:04.222066   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.222074   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:04.222079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:04.222131   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:04.258753   74485 cri.go:89] found id: ""
	I1105 19:14:04.258781   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.258793   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:04.258804   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:04.258819   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:04.299966   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:04.300052   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:04.355364   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:04.355395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:04.368954   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:04.368980   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:04.431658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:04.431688   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:04.431700   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.015289   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:07.029580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:07.029644   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:07.066931   74485 cri.go:89] found id: ""
	I1105 19:14:07.066964   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.066993   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:07.067004   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:07.067059   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:07.104315   74485 cri.go:89] found id: ""
	I1105 19:14:07.104341   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.104349   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:07.104354   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:07.104401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:07.141271   74485 cri.go:89] found id: ""
	I1105 19:14:07.141298   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.141305   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:07.141311   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:07.141360   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:07.174600   74485 cri.go:89] found id: ""
	I1105 19:14:07.174631   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.174643   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:07.174653   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:07.174707   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:07.211920   74485 cri.go:89] found id: ""
	I1105 19:14:07.211958   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.211969   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:07.211975   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:07.212027   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:07.248238   74485 cri.go:89] found id: ""
	I1105 19:14:07.248269   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.248280   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:07.248286   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:07.248344   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:07.279833   74485 cri.go:89] found id: ""
	I1105 19:14:07.279864   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.279874   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:07.279881   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:07.279931   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:07.317411   74485 cri.go:89] found id: ""
	I1105 19:14:07.317441   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.317452   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:07.317461   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:07.317474   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:07.390499   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:07.390535   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:07.390556   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.488858   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:07.488895   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:07.528612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:07.528645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:07.581884   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:07.581927   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:03.961433   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.460953   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.223402   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:08.723797   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:07.348258   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:09.846465   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.096089   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:10.110828   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:10.110898   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:10.147299   74485 cri.go:89] found id: ""
	I1105 19:14:10.147332   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.147344   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:10.147350   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:10.147401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:10.181457   74485 cri.go:89] found id: ""
	I1105 19:14:10.181482   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.181489   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:10.181495   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:10.181540   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:10.215210   74485 cri.go:89] found id: ""
	I1105 19:14:10.215241   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.215252   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:10.215259   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:10.215319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:10.249587   74485 cri.go:89] found id: ""
	I1105 19:14:10.249609   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.249617   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:10.249625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:10.249679   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:10.282566   74485 cri.go:89] found id: ""
	I1105 19:14:10.282591   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.282598   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:10.282604   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:10.282672   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:10.314312   74485 cri.go:89] found id: ""
	I1105 19:14:10.314344   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.314355   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:10.314361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:10.314415   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:10.346988   74485 cri.go:89] found id: ""
	I1105 19:14:10.347016   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.347028   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:10.347035   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:10.347088   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:10.381326   74485 cri.go:89] found id: ""
	I1105 19:14:10.381354   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.381370   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:10.381380   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:10.381394   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:10.418311   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:10.418344   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:10.469559   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:10.469590   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:10.482394   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:10.482427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:10.551831   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:10.551854   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:10.551870   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:08.462072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.961478   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:12.724974   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:11.846737   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:14.346050   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:13.127576   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:13.143182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:13.143242   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:13.188794   74485 cri.go:89] found id: ""
	I1105 19:14:13.188827   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.188839   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:13.188846   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:13.188897   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:13.221790   74485 cri.go:89] found id: ""
	I1105 19:14:13.221818   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.221829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:13.221836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:13.221893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:13.255164   74485 cri.go:89] found id: ""
	I1105 19:14:13.255194   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.255205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:13.255212   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:13.255272   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:13.288203   74485 cri.go:89] found id: ""
	I1105 19:14:13.288231   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.288241   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:13.288249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:13.288307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:13.321438   74485 cri.go:89] found id: ""
	I1105 19:14:13.321463   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.321475   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:13.321482   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:13.321541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:13.361858   74485 cri.go:89] found id: ""
	I1105 19:14:13.361886   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.361897   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:13.361905   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:13.361979   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:13.394210   74485 cri.go:89] found id: ""
	I1105 19:14:13.394239   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.394252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:13.394260   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:13.394324   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:13.434665   74485 cri.go:89] found id: ""
	I1105 19:14:13.434697   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.434705   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:13.434712   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:13.434724   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:13.447849   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:13.447875   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:13.514353   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:13.514377   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:13.514390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:13.590746   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:13.590784   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:13.627704   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:13.627732   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:16.180171   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:16.193282   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:16.193342   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:16.230087   74485 cri.go:89] found id: ""
	I1105 19:14:16.230118   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.230128   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:16.230137   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:16.230200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:16.264315   74485 cri.go:89] found id: ""
	I1105 19:14:16.264348   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.264360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:16.264368   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:16.264429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:16.298197   74485 cri.go:89] found id: ""
	I1105 19:14:16.298231   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.298243   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:16.298251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:16.298316   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:16.333149   74485 cri.go:89] found id: ""
	I1105 19:14:16.333180   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.333193   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:16.333203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:16.333268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:16.366863   74485 cri.go:89] found id: ""
	I1105 19:14:16.366887   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.366895   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:16.366900   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:16.366947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:16.400434   74485 cri.go:89] found id: ""
	I1105 19:14:16.400458   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.400466   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:16.400472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:16.400524   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:16.435475   74485 cri.go:89] found id: ""
	I1105 19:14:16.435497   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.435504   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:16.435510   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:16.435560   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:16.470577   74485 cri.go:89] found id: ""
	I1105 19:14:16.470604   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.470612   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:16.470620   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:16.470632   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:16.483061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:16.483094   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:16.550662   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:16.550690   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:16.550702   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:16.629372   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:16.629411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:16.669488   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:16.669526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:12.961576   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.461132   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.461748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.224068   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.225065   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:16.347305   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:18.847161   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.219244   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:19.232682   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:19.232744   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:19.264594   74485 cri.go:89] found id: ""
	I1105 19:14:19.264624   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.264635   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:19.264649   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:19.264708   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:19.301434   74485 cri.go:89] found id: ""
	I1105 19:14:19.301468   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.301479   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:19.301487   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:19.301558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:19.333465   74485 cri.go:89] found id: ""
	I1105 19:14:19.333494   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.333502   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:19.333508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:19.333558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:19.365865   74485 cri.go:89] found id: ""
	I1105 19:14:19.365892   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.365900   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:19.365906   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:19.365958   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:19.406533   74485 cri.go:89] found id: ""
	I1105 19:14:19.406563   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.406575   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:19.406583   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:19.406639   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:19.439351   74485 cri.go:89] found id: ""
	I1105 19:14:19.439377   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.439386   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:19.439392   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:19.439438   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:19.475033   74485 cri.go:89] found id: ""
	I1105 19:14:19.475058   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.475065   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:19.475070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:19.475119   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:19.508638   74485 cri.go:89] found id: ""
	I1105 19:14:19.508662   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.508670   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:19.508678   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:19.508689   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:19.588268   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:19.588293   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:19.588304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:19.671382   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:19.671415   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:19.716497   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:19.716526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:19.769686   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:19.769722   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.283476   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:22.296393   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:22.296456   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:22.331226   74485 cri.go:89] found id: ""
	I1105 19:14:22.331247   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.331255   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:22.331261   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:22.331306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:22.363466   74485 cri.go:89] found id: ""
	I1105 19:14:22.363499   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.363510   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:22.363518   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:22.363586   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:22.397025   74485 cri.go:89] found id: ""
	I1105 19:14:22.397052   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.397061   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:22.397066   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:22.397116   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:22.429450   74485 cri.go:89] found id: ""
	I1105 19:14:22.429476   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.429486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:22.429493   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:22.429554   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:22.461615   74485 cri.go:89] found id: ""
	I1105 19:14:22.461643   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.461654   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:22.461660   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:22.461728   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:22.492470   74485 cri.go:89] found id: ""
	I1105 19:14:22.492502   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.492513   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:22.492521   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:22.492587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:22.525335   74485 cri.go:89] found id: ""
	I1105 19:14:22.525358   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.525366   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:22.525372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:22.525423   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:22.558854   74485 cri.go:89] found id: ""
	I1105 19:14:22.558881   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.558890   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:22.558901   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:22.558916   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:22.608638   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:22.608674   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.621769   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:22.621800   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:14:19.461812   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.960286   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.724482   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:22.224505   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:24.225072   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.347018   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:23.347099   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	W1105 19:14:22.688971   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:22.688998   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:22.689012   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:22.770517   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:22.770558   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:25.315778   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:25.335372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:25.335444   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:25.383988   74485 cri.go:89] found id: ""
	I1105 19:14:25.384019   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.384029   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:25.384036   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:25.384096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:25.432070   74485 cri.go:89] found id: ""
	I1105 19:14:25.432103   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.432115   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:25.432122   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:25.432184   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:25.464859   74485 cri.go:89] found id: ""
	I1105 19:14:25.464891   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.464902   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:25.464909   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:25.464976   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:25.498684   74485 cri.go:89] found id: ""
	I1105 19:14:25.498712   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.498719   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:25.498724   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:25.498777   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:25.532998   74485 cri.go:89] found id: ""
	I1105 19:14:25.533023   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.533032   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:25.533039   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:25.533084   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:25.568101   74485 cri.go:89] found id: ""
	I1105 19:14:25.568130   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.568138   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:25.568144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:25.568208   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:25.600470   74485 cri.go:89] found id: ""
	I1105 19:14:25.600495   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.600503   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:25.600509   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:25.600564   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:25.631792   74485 cri.go:89] found id: ""
	I1105 19:14:25.631824   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.631834   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:25.631845   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:25.631860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:25.683820   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:25.683856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:25.698066   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:25.698095   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:25.764838   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:25.764869   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:25.764886   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:25.838791   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:25.838828   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:23.966002   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.460153   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.724324   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:29.223490   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:25.847528   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.346739   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.376183   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:28.389686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:28.389760   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:28.424180   74485 cri.go:89] found id: ""
	I1105 19:14:28.424209   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.424221   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:28.424229   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:28.424289   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:28.462742   74485 cri.go:89] found id: ""
	I1105 19:14:28.462765   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.462777   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:28.462784   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:28.462839   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:28.494550   74485 cri.go:89] found id: ""
	I1105 19:14:28.494574   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.494581   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:28.494588   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:28.494667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:28.525606   74485 cri.go:89] found id: ""
	I1105 19:14:28.525632   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.525639   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:28.525645   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:28.525696   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:28.558599   74485 cri.go:89] found id: ""
	I1105 19:14:28.558628   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.558638   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:28.558644   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:28.558701   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:28.590496   74485 cri.go:89] found id: ""
	I1105 19:14:28.590522   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.590530   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:28.590535   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:28.590599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:28.622748   74485 cri.go:89] found id: ""
	I1105 19:14:28.622772   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.622780   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:28.622786   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:28.622836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:28.656452   74485 cri.go:89] found id: ""
	I1105 19:14:28.656477   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.656485   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:28.656493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:28.656504   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.736458   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:28.736505   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:28.771923   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:28.771954   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:28.821099   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:28.821133   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:28.834698   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:28.834726   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:28.900543   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.400733   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:31.414573   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:31.414647   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:31.452244   74485 cri.go:89] found id: ""
	I1105 19:14:31.452275   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.452286   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:31.452293   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:31.452353   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:31.485898   74485 cri.go:89] found id: ""
	I1105 19:14:31.485920   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.485935   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:31.485940   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:31.486009   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:31.522826   74485 cri.go:89] found id: ""
	I1105 19:14:31.522850   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.522858   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:31.522865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:31.522925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:31.560096   74485 cri.go:89] found id: ""
	I1105 19:14:31.560136   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.560164   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:31.560174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:31.560234   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:31.596698   74485 cri.go:89] found id: ""
	I1105 19:14:31.596725   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.596733   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:31.596738   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:31.596792   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:31.635109   74485 cri.go:89] found id: ""
	I1105 19:14:31.635138   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.635148   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:31.635156   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:31.635221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:31.667612   74485 cri.go:89] found id: ""
	I1105 19:14:31.667639   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.667651   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:31.667658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:31.667726   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:31.699815   74485 cri.go:89] found id: ""
	I1105 19:14:31.699844   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.699854   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:31.699864   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:31.699879   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:31.737165   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:31.737196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:31.788513   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:31.788550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:31.801580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:31.801609   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:31.871658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.871683   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:31.871696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.462108   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.961875   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:31.223977   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:33.724027   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.847090   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:32.847233   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.847857   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.450954   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:34.466129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:34.466204   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:34.499984   74485 cri.go:89] found id: ""
	I1105 19:14:34.500009   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.500020   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:34.500027   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:34.500091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:34.532923   74485 cri.go:89] found id: ""
	I1105 19:14:34.532950   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.532958   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:34.532969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:34.533017   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:34.566772   74485 cri.go:89] found id: ""
	I1105 19:14:34.566803   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.566811   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:34.566817   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:34.566872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:34.607398   74485 cri.go:89] found id: ""
	I1105 19:14:34.607422   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.607430   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:34.607435   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:34.607497   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:34.640091   74485 cri.go:89] found id: ""
	I1105 19:14:34.640123   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.640135   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:34.640143   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:34.640207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:34.677164   74485 cri.go:89] found id: ""
	I1105 19:14:34.677201   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.677211   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:34.677217   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:34.677266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:34.714900   74485 cri.go:89] found id: ""
	I1105 19:14:34.714931   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.714942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:34.714949   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:34.715023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:34.751003   74485 cri.go:89] found id: ""
	I1105 19:14:34.751032   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.751040   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:34.751048   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:34.751059   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:34.822279   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:34.822301   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:34.822315   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:34.898607   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:34.898640   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:34.934727   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:34.934754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:34.985935   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:34.985969   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.500117   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:37.512467   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:37.512541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:37.544914   74485 cri.go:89] found id: ""
	I1105 19:14:37.544941   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.544952   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:37.544959   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:37.545028   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:37.581507   74485 cri.go:89] found id: ""
	I1105 19:14:37.581535   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.581545   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:37.581553   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:37.581612   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:37.615546   74485 cri.go:89] found id: ""
	I1105 19:14:37.615576   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.615585   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:37.615592   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:37.615667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:37.648239   74485 cri.go:89] found id: ""
	I1105 19:14:37.648267   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.648276   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:37.648283   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:37.648343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:33.460860   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:35.461416   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:36.224852   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:38.725488   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.347563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:39.347732   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.682861   74485 cri.go:89] found id: ""
	I1105 19:14:37.682891   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.682898   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:37.682904   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:37.682952   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:37.715506   74485 cri.go:89] found id: ""
	I1105 19:14:37.715532   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.715540   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:37.715547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:37.715597   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:37.747973   74485 cri.go:89] found id: ""
	I1105 19:14:37.748003   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.748014   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:37.748022   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:37.748083   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:37.780270   74485 cri.go:89] found id: ""
	I1105 19:14:37.780294   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.780302   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:37.780310   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:37.780321   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.793885   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:37.793914   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:37.860114   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:37.860140   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:37.860154   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:37.941221   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:37.941255   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.980537   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:37.980567   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.532301   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:40.545540   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:40.545599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:40.578642   74485 cri.go:89] found id: ""
	I1105 19:14:40.578687   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.578699   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:40.578707   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:40.578772   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:40.612049   74485 cri.go:89] found id: ""
	I1105 19:14:40.612078   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.612089   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:40.612097   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:40.612159   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:40.644495   74485 cri.go:89] found id: ""
	I1105 19:14:40.644519   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.644527   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:40.644532   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:40.644587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:40.676890   74485 cri.go:89] found id: ""
	I1105 19:14:40.676923   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.676931   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:40.676937   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:40.676984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:40.710095   74485 cri.go:89] found id: ""
	I1105 19:14:40.710125   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.710136   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:40.710144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:40.710200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:40.748323   74485 cri.go:89] found id: ""
	I1105 19:14:40.748353   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.748364   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:40.748372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:40.748501   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:40.781578   74485 cri.go:89] found id: ""
	I1105 19:14:40.781606   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.781618   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:40.781626   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:40.781689   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:40.816010   74485 cri.go:89] found id: ""
	I1105 19:14:40.816048   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.816060   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:40.816071   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:40.816086   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.869836   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:40.869876   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:40.883436   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:40.883471   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:40.946538   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:40.946566   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:40.946585   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:41.023085   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:41.023123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.962163   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.461278   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.726894   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.224939   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:41.847053   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:44.346789   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.566841   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:43.579425   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:43.579498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:43.620500   74485 cri.go:89] found id: ""
	I1105 19:14:43.620526   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.620535   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:43.620541   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:43.620600   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:43.652992   74485 cri.go:89] found id: ""
	I1105 19:14:43.653024   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.653035   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:43.653042   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:43.653105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:43.686945   74485 cri.go:89] found id: ""
	I1105 19:14:43.686991   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.687003   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:43.687010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:43.687124   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:43.720075   74485 cri.go:89] found id: ""
	I1105 19:14:43.720103   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.720114   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:43.720121   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:43.720179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:43.757969   74485 cri.go:89] found id: ""
	I1105 19:14:43.757997   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.758005   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:43.758011   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:43.758071   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:43.790068   74485 cri.go:89] found id: ""
	I1105 19:14:43.790094   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.790103   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:43.790109   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:43.790153   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:43.821696   74485 cri.go:89] found id: ""
	I1105 19:14:43.821722   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.821733   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:43.821741   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:43.821803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:43.855976   74485 cri.go:89] found id: ""
	I1105 19:14:43.856003   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.856011   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:43.856019   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:43.856029   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:43.934375   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:43.934409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:43.972567   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:43.972597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:44.025660   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:44.025696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:44.039229   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:44.039258   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:44.112179   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:46.612815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:46.626070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:46.626145   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:46.659184   74485 cri.go:89] found id: ""
	I1105 19:14:46.659210   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.659218   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:46.659227   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:46.659288   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:46.691887   74485 cri.go:89] found id: ""
	I1105 19:14:46.691917   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.691928   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:46.691934   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:46.692003   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:46.725745   74485 cri.go:89] found id: ""
	I1105 19:14:46.725776   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.725787   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:46.725795   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:46.725847   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:46.761733   74485 cri.go:89] found id: ""
	I1105 19:14:46.761762   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.761773   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:46.761780   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:46.761842   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:46.792926   74485 cri.go:89] found id: ""
	I1105 19:14:46.792955   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.792966   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:46.792974   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:46.793036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:46.824462   74485 cri.go:89] found id: ""
	I1105 19:14:46.824503   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.824512   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:46.824519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:46.824580   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:46.865057   74485 cri.go:89] found id: ""
	I1105 19:14:46.865082   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.865090   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:46.865095   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:46.865146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:46.901357   74485 cri.go:89] found id: ""
	I1105 19:14:46.901385   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.901393   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:46.901401   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:46.901414   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:46.951986   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:46.952021   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:46.966035   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:46.966065   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:47.035163   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:47.035184   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:47.035196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:47.115825   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:47.115860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:42.961397   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.460846   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.724189   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.724319   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:46.847553   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.346787   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.658737   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:49.672088   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:49.672182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:49.708638   74485 cri.go:89] found id: ""
	I1105 19:14:49.708666   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.708674   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:49.708679   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:49.708736   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:49.744485   74485 cri.go:89] found id: ""
	I1105 19:14:49.744513   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.744521   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:49.744526   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:49.744572   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:49.779758   74485 cri.go:89] found id: ""
	I1105 19:14:49.779785   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.779794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:49.779800   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:49.779858   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:49.814216   74485 cri.go:89] found id: ""
	I1105 19:14:49.814248   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.814256   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:49.814262   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:49.814310   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:49.851348   74485 cri.go:89] found id: ""
	I1105 19:14:49.851377   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.851389   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:49.851396   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:49.851455   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:49.883866   74485 cri.go:89] found id: ""
	I1105 19:14:49.883897   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.883906   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:49.883912   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:49.883959   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:49.916944   74485 cri.go:89] found id: ""
	I1105 19:14:49.916967   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.916975   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:49.916980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:49.917039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:49.950405   74485 cri.go:89] found id: ""
	I1105 19:14:49.950437   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.950449   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:49.950459   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:49.950475   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:49.996064   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:49.996102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:50.044865   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:50.044902   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:50.058206   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:50.058236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:50.130371   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:50.130397   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:50.130412   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:49.960550   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.961271   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.724896   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.224128   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.346823   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:53.847102   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.706441   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:52.719571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:52.719655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:52.753850   74485 cri.go:89] found id: ""
	I1105 19:14:52.753880   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.753891   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:52.753899   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:52.753961   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:52.794112   74485 cri.go:89] found id: ""
	I1105 19:14:52.794139   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.794149   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:52.794156   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:52.794218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:52.830151   74485 cri.go:89] found id: ""
	I1105 19:14:52.830178   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.830188   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:52.830195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:52.830258   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:52.864803   74485 cri.go:89] found id: ""
	I1105 19:14:52.864832   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.864853   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:52.864868   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:52.864930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:52.897237   74485 cri.go:89] found id: ""
	I1105 19:14:52.897271   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.897282   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:52.897289   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:52.897351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:52.932236   74485 cri.go:89] found id: ""
	I1105 19:14:52.932262   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.932270   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:52.932275   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:52.932319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:52.965781   74485 cri.go:89] found id: ""
	I1105 19:14:52.965808   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.965817   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:52.965825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:52.965918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:52.999098   74485 cri.go:89] found id: ""
	I1105 19:14:52.999121   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.999129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:52.999137   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:52.999146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:53.051085   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:53.051127   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:53.064690   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:53.064717   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:53.128334   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:53.128358   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:53.128372   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:53.207751   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:53.207791   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:55.745430   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:55.758734   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:55.758821   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:55.791827   74485 cri.go:89] found id: ""
	I1105 19:14:55.791854   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.791862   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:55.791868   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:55.791922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:55.824191   74485 cri.go:89] found id: ""
	I1105 19:14:55.824217   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.824224   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:55.824230   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:55.824278   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:55.858579   74485 cri.go:89] found id: ""
	I1105 19:14:55.858611   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.858619   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:55.858625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:55.858673   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:55.891579   74485 cri.go:89] found id: ""
	I1105 19:14:55.891604   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.891612   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:55.891617   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:55.891663   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:55.924881   74485 cri.go:89] found id: ""
	I1105 19:14:55.924910   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.924920   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:55.924930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:55.924999   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:55.956634   74485 cri.go:89] found id: ""
	I1105 19:14:55.956663   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.956678   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:55.956686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:55.956742   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:55.988770   74485 cri.go:89] found id: ""
	I1105 19:14:55.988803   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.988814   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:55.988821   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:55.988880   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:56.022236   74485 cri.go:89] found id: ""
	I1105 19:14:56.022257   74485 logs.go:282] 0 containers: []
	W1105 19:14:56.022266   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:56.022273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:56.022284   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:56.073035   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:56.073071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:56.086899   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:56.086923   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:56.158219   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:56.158247   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:56.158259   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:56.246621   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:56.246660   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:53.962537   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.461516   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:54.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.725381   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:59.223995   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:55.847591   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.346027   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:00.349718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.791443   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:58.804398   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:58.804476   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:58.837812   74485 cri.go:89] found id: ""
	I1105 19:14:58.837840   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.837856   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:58.837863   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:58.837926   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:58.870154   74485 cri.go:89] found id: ""
	I1105 19:14:58.870186   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.870197   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:58.870204   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:58.870268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:58.906518   74485 cri.go:89] found id: ""
	I1105 19:14:58.906545   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.906553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:58.906563   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:58.906614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:58.939320   74485 cri.go:89] found id: ""
	I1105 19:14:58.939346   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.939357   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:58.939364   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:58.939426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:58.974116   74485 cri.go:89] found id: ""
	I1105 19:14:58.974143   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.974153   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:58.974160   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:58.974221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:59.006820   74485 cri.go:89] found id: ""
	I1105 19:14:59.006854   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.006866   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:59.006873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:59.006933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:59.039691   74485 cri.go:89] found id: ""
	I1105 19:14:59.039723   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.039735   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:59.039742   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:59.039800   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:59.071829   74485 cri.go:89] found id: ""
	I1105 19:14:59.071860   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.071881   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:59.071893   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:59.071906   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:59.124158   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:59.124195   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:59.138563   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:59.138594   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:59.216148   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:59.216174   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:59.216189   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:59.295262   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:59.295297   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:01.833789   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:01.847332   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:01.847408   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:01.882721   74485 cri.go:89] found id: ""
	I1105 19:15:01.882743   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.882750   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:01.882755   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:01.882811   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:01.916457   74485 cri.go:89] found id: ""
	I1105 19:15:01.916479   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.916487   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:01.916502   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:01.916557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:01.950521   74485 cri.go:89] found id: ""
	I1105 19:15:01.950552   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.950564   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:01.950571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:01.950624   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:01.985823   74485 cri.go:89] found id: ""
	I1105 19:15:01.985852   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.985862   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:01.985870   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:01.985918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:02.021689   74485 cri.go:89] found id: ""
	I1105 19:15:02.021720   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.021731   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:02.021739   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:02.021804   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:02.058632   74485 cri.go:89] found id: ""
	I1105 19:15:02.058658   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.058666   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:02.058672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:02.058738   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:02.097916   74485 cri.go:89] found id: ""
	I1105 19:15:02.097947   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.097956   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:02.097961   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:02.098010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:02.131992   74485 cri.go:89] found id: ""
	I1105 19:15:02.132027   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.132038   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:02.132050   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:02.132066   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:02.188605   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:02.188645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:02.201873   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:02.201904   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:02.274767   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:02.274795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:02.274811   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:02.358520   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:02.358559   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:58.962072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.461009   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.224719   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:03.724333   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:02.847593   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.348665   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:04.897693   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:04.913131   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:04.913207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:04.952546   74485 cri.go:89] found id: ""
	I1105 19:15:04.952571   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.952579   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:04.952584   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:04.952643   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:04.987334   74485 cri.go:89] found id: ""
	I1105 19:15:04.987360   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.987368   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:04.987374   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:04.987434   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:05.021873   74485 cri.go:89] found id: ""
	I1105 19:15:05.021906   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.021919   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:05.021926   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:05.021985   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:05.056169   74485 cri.go:89] found id: ""
	I1105 19:15:05.056199   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.056208   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:05.056213   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:05.056265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:05.093090   74485 cri.go:89] found id: ""
	I1105 19:15:05.093117   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.093125   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:05.093130   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:05.093182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:05.127533   74485 cri.go:89] found id: ""
	I1105 19:15:05.127557   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.127564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:05.127576   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:05.127625   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:05.165127   74485 cri.go:89] found id: ""
	I1105 19:15:05.165162   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.165173   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:05.165180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:05.165243   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:05.200526   74485 cri.go:89] found id: ""
	I1105 19:15:05.200556   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.200567   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:05.200578   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:05.200593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:05.247497   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:05.247535   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:05.261963   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:05.261996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:05.336813   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:05.336833   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:05.336844   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:05.412278   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:05.412320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:03.461266   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.463142   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.728530   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:08.227700   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.848748   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:10.346754   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.951085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:07.966125   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:07.966203   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:08.004253   74485 cri.go:89] found id: ""
	I1105 19:15:08.004291   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.004302   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:08.004310   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:08.004373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:08.039539   74485 cri.go:89] found id: ""
	I1105 19:15:08.039562   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.039569   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:08.039575   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:08.039629   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:08.076043   74485 cri.go:89] found id: ""
	I1105 19:15:08.076080   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.076093   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:08.076101   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:08.076157   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:08.110489   74485 cri.go:89] found id: ""
	I1105 19:15:08.110512   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.110519   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:08.110525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:08.110589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:08.147532   74485 cri.go:89] found id: ""
	I1105 19:15:08.147564   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.147574   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:08.147580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:08.147628   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:08.182225   74485 cri.go:89] found id: ""
	I1105 19:15:08.182248   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.182256   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:08.182263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:08.182322   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:08.223488   74485 cri.go:89] found id: ""
	I1105 19:15:08.223524   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.223536   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:08.223544   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:08.223610   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:08.266524   74485 cri.go:89] found id: ""
	I1105 19:15:08.266559   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.266571   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:08.266582   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:08.266597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:08.279036   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:08.279061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:08.346030   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:08.346052   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:08.346064   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:08.428081   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:08.428118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:08.464760   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:08.464789   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.016193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:11.030598   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:11.030681   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:11.066035   74485 cri.go:89] found id: ""
	I1105 19:15:11.066064   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.066073   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:11.066078   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:11.066133   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:11.103906   74485 cri.go:89] found id: ""
	I1105 19:15:11.103937   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.103948   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:11.103955   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:11.104023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:11.142936   74485 cri.go:89] found id: ""
	I1105 19:15:11.143024   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.143034   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:11.143041   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:11.143091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:11.180041   74485 cri.go:89] found id: ""
	I1105 19:15:11.180074   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.180086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:11.180094   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:11.180158   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:11.215661   74485 cri.go:89] found id: ""
	I1105 19:15:11.215693   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.215701   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:11.215707   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:11.215758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:11.252603   74485 cri.go:89] found id: ""
	I1105 19:15:11.252651   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.252663   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:11.252672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:11.252739   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:11.299295   74485 cri.go:89] found id: ""
	I1105 19:15:11.299328   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.299340   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:11.299347   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:11.299402   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:11.355153   74485 cri.go:89] found id: ""
	I1105 19:15:11.355177   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.355185   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:11.355193   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:11.355206   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:11.441076   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:11.441118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:11.480367   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:11.480396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.534646   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:11.534683   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:11.548141   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:11.548170   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:11.616452   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:07.961073   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:09.962118   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.455874   73732 pod_ready.go:82] duration metric: took 4m0.000853559s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:12.455911   73732 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:15:12.455936   73732 pod_ready.go:39] duration metric: took 4m14.55377544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:12.455984   73732 kubeadm.go:597] duration metric: took 4m23.030552871s to restartPrimaryControlPlane
	W1105 19:15:12.456078   73732 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:12.456111   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:10.724247   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.725886   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.846646   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.848074   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.117448   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:14.131224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:14.131297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:14.167811   74485 cri.go:89] found id: ""
	I1105 19:15:14.167843   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.167855   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:14.167862   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:14.167921   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:14.204128   74485 cri.go:89] found id: ""
	I1105 19:15:14.204156   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.204164   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:14.204169   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:14.204232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:14.240687   74485 cri.go:89] found id: ""
	I1105 19:15:14.240716   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.240727   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:14.240735   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:14.240788   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:14.274204   74485 cri.go:89] found id: ""
	I1105 19:15:14.274231   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.274242   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:14.274249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:14.274307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:14.312090   74485 cri.go:89] found id: ""
	I1105 19:15:14.312119   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.312130   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:14.312139   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:14.312200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:14.346824   74485 cri.go:89] found id: ""
	I1105 19:15:14.346857   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.346868   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:14.346875   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:14.346934   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:14.380634   74485 cri.go:89] found id: ""
	I1105 19:15:14.380668   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.380679   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:14.380686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:14.380746   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:14.414402   74485 cri.go:89] found id: ""
	I1105 19:15:14.414432   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.414441   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:14.414449   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:14.414459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:14.464542   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:14.464581   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:14.478195   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:14.478225   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:14.553670   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:14.553693   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:14.553708   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:14.634619   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:14.634659   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.174085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:17.191712   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:17.191771   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:17.234101   74485 cri.go:89] found id: ""
	I1105 19:15:17.234132   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.234143   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:17.234149   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:17.234213   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:17.281548   74485 cri.go:89] found id: ""
	I1105 19:15:17.281574   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.281581   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:17.281588   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:17.281655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:17.337698   74485 cri.go:89] found id: ""
	I1105 19:15:17.337727   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.337735   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:17.337743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:17.337790   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:17.371756   74485 cri.go:89] found id: ""
	I1105 19:15:17.371782   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.371790   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:17.371796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:17.371854   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:17.404989   74485 cri.go:89] found id: ""
	I1105 19:15:17.405015   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.405026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:17.405033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:17.405096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:17.438613   74485 cri.go:89] found id: ""
	I1105 19:15:17.438637   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.438648   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:17.438656   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:17.438717   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:17.470465   74485 cri.go:89] found id: ""
	I1105 19:15:17.470494   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.470502   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:17.470508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:17.470558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:17.503835   74485 cri.go:89] found id: ""
	I1105 19:15:17.503867   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.503876   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:17.503884   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:17.503896   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:17.584110   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:17.584146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.626928   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:17.626955   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:15.223749   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.225434   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.347847   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:19.847047   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.679356   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:17.679397   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:17.693476   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:17.693506   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:17.766809   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.266926   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:20.282219   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:20.282293   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:20.322133   74485 cri.go:89] found id: ""
	I1105 19:15:20.322163   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.322171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:20.322178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:20.322248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:20.357030   74485 cri.go:89] found id: ""
	I1105 19:15:20.357072   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.357084   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:20.357091   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:20.357156   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:20.390523   74485 cri.go:89] found id: ""
	I1105 19:15:20.390549   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.390559   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:20.390567   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:20.390631   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:20.425807   74485 cri.go:89] found id: ""
	I1105 19:15:20.425830   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.425837   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:20.425843   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:20.425903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:20.461984   74485 cri.go:89] found id: ""
	I1105 19:15:20.462014   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.462026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:20.462033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:20.462094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:20.495689   74485 cri.go:89] found id: ""
	I1105 19:15:20.495725   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.495739   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:20.495746   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:20.495799   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:20.528666   74485 cri.go:89] found id: ""
	I1105 19:15:20.528701   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.528713   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:20.528721   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:20.528783   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:20.562566   74485 cri.go:89] found id: ""
	I1105 19:15:20.562596   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.562606   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:20.562614   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:20.562624   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:20.610961   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:20.611000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:20.623898   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:20.623928   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:20.696412   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.696440   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:20.696456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:20.779601   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:20.779642   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:19.725198   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.224019   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.225286   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.347992   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.846718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:23.319846   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:23.333278   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:23.333357   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:23.370771   74485 cri.go:89] found id: ""
	I1105 19:15:23.370796   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.370805   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:23.370810   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:23.370872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:23.405994   74485 cri.go:89] found id: ""
	I1105 19:15:23.406021   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.406029   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:23.406034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:23.406092   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:23.443729   74485 cri.go:89] found id: ""
	I1105 19:15:23.443757   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.443767   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:23.443774   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:23.443836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:23.476162   74485 cri.go:89] found id: ""
	I1105 19:15:23.476188   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.476197   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:23.476205   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:23.476266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:23.509325   74485 cri.go:89] found id: ""
	I1105 19:15:23.509353   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.509363   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:23.509371   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:23.509427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:23.541880   74485 cri.go:89] found id: ""
	I1105 19:15:23.541912   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.541922   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:23.541929   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:23.541993   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:23.574204   74485 cri.go:89] found id: ""
	I1105 19:15:23.574236   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.574248   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:23.574256   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:23.574323   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:23.606865   74485 cri.go:89] found id: ""
	I1105 19:15:23.606896   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.606908   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:23.606918   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:23.606932   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:23.673771   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:23.673792   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:23.673803   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:23.753298   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:23.753335   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:23.792273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:23.792304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:23.843072   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:23.843110   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.356859   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:26.369417   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:26.369488   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:26.403611   74485 cri.go:89] found id: ""
	I1105 19:15:26.403639   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.403647   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:26.403653   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:26.403725   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:26.439891   74485 cri.go:89] found id: ""
	I1105 19:15:26.439924   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.439936   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:26.439943   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:26.439991   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:26.473502   74485 cri.go:89] found id: ""
	I1105 19:15:26.473542   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.473554   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:26.473561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:26.473640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:26.505666   74485 cri.go:89] found id: ""
	I1105 19:15:26.505695   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.505703   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:26.505710   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:26.505769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:26.539781   74485 cri.go:89] found id: ""
	I1105 19:15:26.539815   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.539827   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:26.539835   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:26.539911   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:26.574673   74485 cri.go:89] found id: ""
	I1105 19:15:26.574712   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.574721   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:26.574727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:26.574773   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:26.608410   74485 cri.go:89] found id: ""
	I1105 19:15:26.608433   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.608441   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:26.608446   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:26.608494   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:26.644036   74485 cri.go:89] found id: ""
	I1105 19:15:26.644065   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.644076   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:26.644087   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:26.644098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.718901   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:26.718937   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:26.758920   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:26.758953   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:26.811241   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:26.811277   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.824931   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:26.824961   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:26.891799   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:26.725062   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:27.724594   74141 pod_ready.go:82] duration metric: took 4m0.006622979s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:27.724627   74141 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1105 19:15:27.724644   74141 pod_ready.go:39] duration metric: took 4m0.807889519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:27.724663   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:15:27.724711   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:27.724769   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:27.771870   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:27.771897   74141 cri.go:89] found id: ""
	I1105 19:15:27.771906   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:27.771966   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.776484   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:27.776553   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:27.823529   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:27.823564   74141 cri.go:89] found id: ""
	I1105 19:15:27.823576   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:27.823638   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.828600   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:27.828685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:27.878206   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:27.878242   74141 cri.go:89] found id: ""
	I1105 19:15:27.878254   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:27.878317   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.882545   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:27.882640   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:27.920102   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:27.920127   74141 cri.go:89] found id: ""
	I1105 19:15:27.920137   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:27.920189   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.924516   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:27.924593   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:27.969493   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:27.969523   74141 cri.go:89] found id: ""
	I1105 19:15:27.969534   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:27.969589   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.973637   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:27.973724   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:28.014369   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.014396   74141 cri.go:89] found id: ""
	I1105 19:15:28.014405   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:28.014463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.019040   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:28.019112   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:28.056411   74141 cri.go:89] found id: ""
	I1105 19:15:28.056438   74141 logs.go:282] 0 containers: []
	W1105 19:15:28.056446   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:28.056452   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:28.056502   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:28.099541   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.099562   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.099566   74141 cri.go:89] found id: ""
	I1105 19:15:28.099573   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:28.099628   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.104144   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.108443   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:28.108465   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.153262   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:28.153302   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.197210   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:28.197237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:28.242915   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:28.242944   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:28.257468   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:28.257497   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:28.299795   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:28.299825   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:28.333983   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:28.334015   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:28.369174   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:28.369202   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:28.405838   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:28.405869   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:28.477842   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:28.477880   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:28.595832   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:28.595865   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:28.639146   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:28.639179   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.689519   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:28.689554   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.846977   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:28.847878   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:29.392417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:29.405249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:29.405331   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:29.437397   74485 cri.go:89] found id: ""
	I1105 19:15:29.437432   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.437443   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:29.437450   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:29.437504   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:29.469908   74485 cri.go:89] found id: ""
	I1105 19:15:29.469938   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.469946   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:29.469951   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:29.470008   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:29.502302   74485 cri.go:89] found id: ""
	I1105 19:15:29.502331   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.502339   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:29.502345   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:29.502391   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:29.534285   74485 cri.go:89] found id: ""
	I1105 19:15:29.534309   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.534317   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:29.534322   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:29.534373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:29.571918   74485 cri.go:89] found id: ""
	I1105 19:15:29.571962   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.571973   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:29.571983   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:29.572042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:29.605324   74485 cri.go:89] found id: ""
	I1105 19:15:29.605354   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.605365   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:29.605373   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:29.605435   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:29.640181   74485 cri.go:89] found id: ""
	I1105 19:15:29.640210   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.640218   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:29.640224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:29.640273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:29.671121   74485 cri.go:89] found id: ""
	I1105 19:15:29.671147   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.671155   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:29.671164   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:29.671174   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:29.750821   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:29.750856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:29.787452   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:29.787479   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:29.840413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:29.840459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:29.855540   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:29.855580   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:29.925849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
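	The probe sequence above can be reproduced by hand; a minimal sketch using the same commands the log runs (comments are illustrative — on this node every probe comes back empty or is refused because no control-plane containers are up yet):
	# is an apiserver process running?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# does CRI-O know about a kube-apiserver container, running or exited?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# fall back to the runtime and kubelet journals
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# and to the apiserver itself, which here refuses connections on localhost:8443
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig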
	I1105 19:15:32.426016   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:32.438759   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:32.438824   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:32.476376   74485 cri.go:89] found id: ""
	I1105 19:15:32.476406   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.476416   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:32.476423   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:32.476490   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:32.512328   74485 cri.go:89] found id: ""
	I1105 19:15:32.512352   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.512360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:32.512365   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:32.512414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:32.546803   74485 cri.go:89] found id: ""
	I1105 19:15:32.546833   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.546844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:32.546851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:32.546925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:32.585904   74485 cri.go:89] found id: ""
	I1105 19:15:32.585934   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.585946   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:32.585953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:32.586014   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:32.620976   74485 cri.go:89] found id: ""
	I1105 19:15:32.621005   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.621012   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:32.621018   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:32.621082   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.668028   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:31.684024   74141 api_server.go:72] duration metric: took 4m12.496021782s to wait for apiserver process to appear ...
	I1105 19:15:31.684060   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:15:31.684105   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:31.684163   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:31.719462   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:31.719496   74141 cri.go:89] found id: ""
	I1105 19:15:31.719506   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:31.719559   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.723632   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:31.723702   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:31.761976   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:31.762001   74141 cri.go:89] found id: ""
	I1105 19:15:31.762010   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:31.762067   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.766066   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:31.766137   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:31.799673   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:31.799694   74141 cri.go:89] found id: ""
	I1105 19:15:31.799701   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:31.799753   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.803632   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:31.803714   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:31.841782   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:31.841808   74141 cri.go:89] found id: ""
	I1105 19:15:31.841818   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:31.841873   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.850409   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:31.850471   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:31.891932   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:31.891959   74141 cri.go:89] found id: ""
	I1105 19:15:31.891969   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:31.892026   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.896065   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:31.896125   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.932759   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:31.932781   74141 cri.go:89] found id: ""
	I1105 19:15:31.932788   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:31.932831   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.936611   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:31.936685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:31.971296   74141 cri.go:89] found id: ""
	I1105 19:15:31.971328   74141 logs.go:282] 0 containers: []
	W1105 19:15:31.971339   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:31.971348   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:31.971410   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:32.006153   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:32.006173   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.006177   74141 cri.go:89] found id: ""
	I1105 19:15:32.006184   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:32.006226   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.010159   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.013807   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.013831   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.084222   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:32.084273   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:32.127875   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:32.127928   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:32.173008   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:32.173041   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:32.235366   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.235402   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.714822   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:32.714861   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.750733   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.750761   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.796233   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.796264   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.809269   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.809296   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:32.931162   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:32.931196   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:32.968551   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:32.968578   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:33.008115   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:33.008152   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:33.046201   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:33.046237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:31.346652   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:33.347118   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:32.658958   74485 cri.go:89] found id: ""
	I1105 19:15:32.659006   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.659018   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:32.659026   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:32.659091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:32.694317   74485 cri.go:89] found id: ""
	I1105 19:15:32.694341   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.694349   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:32.694354   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:32.694403   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:32.728277   74485 cri.go:89] found id: ""
	I1105 19:15:32.728314   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.728327   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:32.728338   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.728352   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.815579   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.815615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.856776   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.856807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.909477   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.909518   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.923789   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.923817   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:32.997898   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:35.498040   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:35.511537   74485 kubeadm.go:597] duration metric: took 4m4.46832509s to restartPrimaryControlPlane
	W1105 19:15:35.511612   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:35.511644   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:35.586678   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:15:35.591512   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:15:35.592489   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:15:35.592507   74141 api_server.go:131] duration metric: took 3.908440367s to wait for apiserver health ...
	I1105 19:15:35.592514   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:15:35.592538   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:35.592589   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:35.636389   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.636408   74141 cri.go:89] found id: ""
	I1105 19:15:35.636416   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:35.636463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.640778   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:35.640839   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:35.676793   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:35.676818   74141 cri.go:89] found id: ""
	I1105 19:15:35.676828   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:35.676890   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.681596   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:35.681669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:35.721728   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:35.721754   74141 cri.go:89] found id: ""
	I1105 19:15:35.721763   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:35.721808   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.725619   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:35.725677   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:35.765348   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:35.765377   74141 cri.go:89] found id: ""
	I1105 19:15:35.765386   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:35.765439   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.769594   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:35.769669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:35.809427   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:35.809452   74141 cri.go:89] found id: ""
	I1105 19:15:35.809460   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:35.809505   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.814317   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:35.814376   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:35.853861   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:35.853882   74141 cri.go:89] found id: ""
	I1105 19:15:35.853890   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:35.853934   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.857734   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:35.857787   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:35.897791   74141 cri.go:89] found id: ""
	I1105 19:15:35.897816   74141 logs.go:282] 0 containers: []
	W1105 19:15:35.897824   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:35.897830   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:35.897887   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:35.940906   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:35.940940   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:35.940946   74141 cri.go:89] found id: ""
	I1105 19:15:35.940954   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:35.941006   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.945200   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.948860   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:35.948884   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.992660   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:35.992690   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:36.033586   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:36.033617   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:36.066599   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:36.066643   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:36.104895   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:36.104932   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:36.489747   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:36.489781   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:36.531923   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:36.531952   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:36.598718   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:36.598758   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:36.612969   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:36.612998   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:36.718535   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:36.718568   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:36.755636   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:36.755677   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:36.815561   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:36.815640   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:36.850878   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:36.850904   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:39.390699   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:15:39.390733   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.390738   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.390743   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.390747   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.390750   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.390753   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.390760   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.390764   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.390771   74141 system_pods.go:74] duration metric: took 3.798251189s to wait for pod list to return data ...
	I1105 19:15:39.390777   74141 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:15:39.393894   74141 default_sa.go:45] found service account: "default"
	I1105 19:15:39.393914   74141 default_sa.go:55] duration metric: took 3.132788ms for default service account to be created ...
	I1105 19:15:39.393929   74141 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:15:39.398455   74141 system_pods.go:86] 8 kube-system pods found
	I1105 19:15:39.398480   74141 system_pods.go:89] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.398485   74141 system_pods.go:89] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.398490   74141 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.398494   74141 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.398497   74141 system_pods.go:89] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.398501   74141 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.398508   74141 system_pods.go:89] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.398512   74141 system_pods.go:89] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.398520   74141 system_pods.go:126] duration metric: took 4.586494ms to wait for k8s-apps to be running ...
	I1105 19:15:39.398529   74141 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:15:39.398569   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.413878   74141 system_svc.go:56] duration metric: took 15.340417ms WaitForService to wait for kubelet
	I1105 19:15:39.413908   74141 kubeadm.go:582] duration metric: took 4m20.225910976s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:15:39.413936   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:15:39.416851   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:15:39.416870   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:15:39.416880   74141 node_conditions.go:105] duration metric: took 2.939584ms to run NodePressure ...
	I1105 19:15:39.416891   74141 start.go:241] waiting for startup goroutines ...
	I1105 19:15:39.416899   74141 start.go:246] waiting for cluster config update ...
	I1105 19:15:39.416911   74141 start.go:255] writing updated cluster config ...
	I1105 19:15:39.417211   74141 ssh_runner.go:195] Run: rm -f paused
	I1105 19:15:39.463773   74141 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:15:39.465688   74141 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-608095" cluster and "default" namespace by default
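	The readiness checks that precede this "Done!" can be repeated manually; a minimal sketch using the healthz endpoint and profile name from the log (the kubectl context name assumes the kubeconfig minikube just wrote, and curl -k is used only to skip verification of the cluster's self-signed certificate):
	# apiserver health endpoint polled above (expects HTTP 200 and "ok")
	curl -k https://192.168.50.10:8444/healthz
	# the kube-system pod and default service-account checks from the log, by hand
	kubectl --context default-k8s-diff-port-608095 get pods -n kube-system
	kubectl --context default-k8s-diff-port-608095 get serviceaccount default
	# kubelet service state on the node (e.g. via "minikube ssh -p default-k8s-diff-port-608095")
	sudo systemctl is-active kubelet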
	I1105 19:15:39.702249   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.19058336s)
	I1105 19:15:39.702314   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.717966   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:39.728114   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:39.740451   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:39.740476   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:39.740519   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:39.751089   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:39.751150   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:39.761832   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:39.771841   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:39.771904   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:39.782332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.792379   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:39.792438   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.801625   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:39.811691   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:39.811740   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
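	The four grep/rm pairs above amount to a small stale-config cleanup; a condensed sketch of the same per-file check (the loop form is illustrative, the paths and endpoint are taken from the log):
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # drop any kubeconfig that does not point at the expected control-plane endpoint
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done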
	I1105 19:15:39.821162   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:39.891377   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:15:39.891443   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:40.034176   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:40.034337   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:40.034476   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:15:40.211588   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:35.847491   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:38.346965   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.348252   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.213724   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:40.213838   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:40.213939   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:40.214048   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:40.214172   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:40.214266   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:40.214375   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:40.214478   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:40.214567   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:40.214687   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:40.214819   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:40.214884   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:40.214980   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:40.358606   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:40.632263   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:40.766570   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:40.885914   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:40.902379   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:40.903647   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:40.903716   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:41.040274   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:41.042093   74485 out.go:235]   - Booting up control plane ...
	I1105 19:15:41.042222   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:41.048448   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:41.058445   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:41.059466   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:41.062648   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:15:38.649673   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193536212s)
	I1105 19:15:38.649753   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:38.665214   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:38.674520   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:38.684078   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:38.684102   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:38.684151   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:38.693169   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:38.693239   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:38.702305   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:38.710796   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:38.710868   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:38.719716   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.728090   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:38.728143   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.737219   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:38.745625   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:38.745692   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:38.754684   73732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:38.914343   73732 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:15:42.847011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:44.851431   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:47.368221   73732 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:15:47.368296   73732 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:47.368405   73732 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:47.368552   73732 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:47.368686   73732 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:15:47.368787   73732 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:47.370333   73732 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:47.370429   73732 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:47.370529   73732 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:47.370650   73732 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:47.370763   73732 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:47.370900   73732 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:47.371009   73732 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:47.371110   73732 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:47.371198   73732 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:47.371312   73732 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:47.371431   73732 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:47.371494   73732 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:47.371573   73732 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:47.371656   73732 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:47.371725   73732 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:15:47.371797   73732 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:47.371893   73732 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:47.371976   73732 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:47.372074   73732 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:47.372160   73732 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:47.374386   73732 out.go:235]   - Booting up control plane ...
	I1105 19:15:47.374503   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:47.374622   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:47.374707   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:47.374838   73732 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:47.374950   73732 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:47.375046   73732 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:47.375226   73732 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:15:47.375367   73732 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:15:47.375450   73732 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.124171ms
	I1105 19:15:47.375549   73732 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:15:47.375647   73732 kubeadm.go:310] [api-check] The API server is healthy after 5.001431223s
	I1105 19:15:47.375804   73732 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:15:47.375968   73732 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:15:47.376055   73732 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:15:47.376321   73732 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-271881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:15:47.376412   73732 kubeadm.go:310] [bootstrap-token] Using token: 2xak8n.owgv6oncwawjarav
	I1105 19:15:47.377766   73732 out.go:235]   - Configuring RBAC rules ...
	I1105 19:15:47.377911   73732 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:15:47.378024   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:15:47.378138   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:15:47.378243   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:15:47.378337   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:15:47.378408   73732 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:15:47.378502   73732 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:15:47.378541   73732 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:15:47.378580   73732 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:15:47.378587   73732 kubeadm.go:310] 
	I1105 19:15:47.378635   73732 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:15:47.378645   73732 kubeadm.go:310] 
	I1105 19:15:47.378711   73732 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:15:47.378718   73732 kubeadm.go:310] 
	I1105 19:15:47.378760   73732 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:15:47.378813   73732 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:15:47.378856   73732 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:15:47.378860   73732 kubeadm.go:310] 
	I1105 19:15:47.378910   73732 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:15:47.378913   73732 kubeadm.go:310] 
	I1105 19:15:47.378955   73732 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:15:47.378959   73732 kubeadm.go:310] 
	I1105 19:15:47.379030   73732 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:15:47.379114   73732 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:15:47.379195   73732 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:15:47.379203   73732 kubeadm.go:310] 
	I1105 19:15:47.379320   73732 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:15:47.379427   73732 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:15:47.379442   73732 kubeadm.go:310] 
	I1105 19:15:47.379559   73732 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.379718   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:15:47.379762   73732 kubeadm.go:310] 	--control-plane 
	I1105 19:15:47.379770   73732 kubeadm.go:310] 
	I1105 19:15:47.379844   73732 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:15:47.379851   73732 kubeadm.go:310] 
	I1105 19:15:47.379977   73732 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.380150   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:15:47.380167   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:15:47.380174   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:15:47.381714   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:15:47.382944   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:15:47.394080   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:15:47.411715   73732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:15:47.411773   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.411821   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-271881 minikube.k8s.io/updated_at=2024_11_05T19_15_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=embed-certs-271881 minikube.k8s.io/primary=true
	I1105 19:15:47.439084   73732 ops.go:34] apiserver oom_adj: -16
	I1105 19:15:47.601691   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.348094   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:49.847296   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:48.102103   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:48.602767   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.101780   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.601826   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.101976   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.602763   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.102779   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.601930   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.102574   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.241636   73732 kubeadm.go:1113] duration metric: took 4.829922813s to wait for elevateKubeSystemPrivileges
	I1105 19:15:52.241680   73732 kubeadm.go:394] duration metric: took 5m2.866246993s to StartCluster
	I1105 19:15:52.241704   73732 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.241801   73732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:15:52.244409   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.244716   73732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:15:52.244789   73732 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:15:52.244893   73732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-271881"
	I1105 19:15:52.244914   73732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-271881"
	I1105 19:15:52.244911   73732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-271881"
	I1105 19:15:52.244933   73732 addons.go:69] Setting metrics-server=true in profile "embed-certs-271881"
	I1105 19:15:52.244941   73732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-271881"
	I1105 19:15:52.244954   73732 addons.go:234] Setting addon metrics-server=true in "embed-certs-271881"
	W1105 19:15:52.244965   73732 addons.go:243] addon metrics-server should already be in state true
	I1105 19:15:52.244998   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1105 19:15:52.244925   73732 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:15:52.245001   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245065   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245404   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245422   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245436   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245455   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245464   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245543   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.246341   73732 out.go:177] * Verifying Kubernetes components...
	I1105 19:15:52.247801   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:15:52.261802   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I1105 19:15:52.262325   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.262955   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.263159   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.263591   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.264367   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.264413   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.265696   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I1105 19:15:52.265941   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I1105 19:15:52.266161   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266322   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266776   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266782   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266800   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.266803   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.267185   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267224   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267353   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.267804   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.267846   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.271094   73732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-271881"
	W1105 19:15:52.271117   73732 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:15:52.271147   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.271509   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.271554   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.284180   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I1105 19:15:52.284456   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1105 19:15:52.284703   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.284925   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.285248   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285261   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285355   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285363   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285578   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285727   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285766   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.285862   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.287834   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.288259   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.290341   73732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:15:52.290346   73732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:15:52.290695   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I1105 19:15:52.291040   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.291464   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.291479   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.291776   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.291974   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:15:52.291994   73732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:15:52.292015   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292054   73732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.292067   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:15:52.292079   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292355   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.292400   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.295296   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295650   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.295675   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295701   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295797   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.295969   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296102   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296247   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.296272   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.296305   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.296582   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.296714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296848   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296947   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.314049   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I1105 19:15:52.314561   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.315148   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.315168   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.315884   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.316080   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.318146   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.318465   73732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.318478   73732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:15:52.318496   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.321312   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321825   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.321850   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321885   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.322095   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.322238   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.322397   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.453762   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:15:52.483722   73732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493492   73732 node_ready.go:49] node "embed-certs-271881" has status "Ready":"True"
	I1105 19:15:52.493519   73732 node_ready.go:38] duration metric: took 9.757528ms for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493530   73732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:52.508208   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
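	The pod_ready helper above polls until each system pod reports Ready. Outside the test harness, roughly the same wait could be expressed with plain kubectl (a sketch reusing the pod and context names from this log; not a command the test itself runs):

	    kubectl --context embed-certs-271881 -n kube-system \
	      wait --for=condition=Ready pod/coredns-7c65d6cfc9-7dk86 --timeout=6m0s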
	I1105 19:15:52.577925   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.589366   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:15:52.589389   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:15:52.612570   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:15:52.612593   73732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:15:52.645851   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.647686   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:52.647713   73732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:15:52.668865   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:53.246894   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246918   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.246923   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246950   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247230   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247277   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247305   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247323   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247338   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247349   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247331   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247368   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247378   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247710   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247739   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247746   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247779   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247800   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247811   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.269143   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.269165   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.269465   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.269479   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.269483   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.494717   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.494741   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495080   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495100   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495114   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.495123   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495348   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.495394   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495414   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495427   73732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-271881"
	I1105 19:15:53.497126   73732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:15:52.347616   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:54.352434   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:53.498891   73732 addons.go:510] duration metric: took 1.254108253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
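	With the three addons reported as enabled, their state could be confirmed from outside the run with standard minikube/kubectl commands (illustrative sketch, not part of the logged session):

	    minikube addons list -p embed-certs-271881
	    kubectl --context embed-certs-271881 -n kube-system get deploy/metrics-server pod/storage-provisioner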
	I1105 19:15:54.518219   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:57.015647   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:56.846198   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:58.847684   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:59.514759   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:01.514818   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:02.515124   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.515148   73732 pod_ready.go:82] duration metric: took 10.006914802s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.515158   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519864   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.519889   73732 pod_ready.go:82] duration metric: took 4.723101ms for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519900   73732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524948   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.524970   73732 pod_ready.go:82] duration metric: took 5.063029ms for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524979   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529710   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.529739   73732 pod_ready.go:82] duration metric: took 4.753888ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529750   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534282   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.534301   73732 pod_ready.go:82] duration metric: took 4.544677ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534309   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912364   73732 pod_ready.go:93] pod "kube-proxy-nfxcj" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.912387   73732 pod_ready.go:82] duration metric: took 378.071939ms for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912397   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311793   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:03.311816   73732 pod_ready.go:82] duration metric: took 399.412502ms for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311822   73732 pod_ready.go:39] duration metric: took 10.818282425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:03.311836   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:16:03.311883   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:16:03.327913   73732 api_server.go:72] duration metric: took 11.083157176s to wait for apiserver process to appear ...
	I1105 19:16:03.327947   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:16:03.327968   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:16:03.334499   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:16:03.335530   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:16:03.335550   73732 api_server.go:131] duration metric: took 7.596072ms to wait for apiserver health ...
	I1105 19:16:03.335558   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:16:03.514782   73732 system_pods.go:59] 9 kube-system pods found
	I1105 19:16:03.514813   73732 system_pods.go:61] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.514820   73732 system_pods.go:61] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.514825   73732 system_pods.go:61] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.514830   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.514835   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.514840   73732 system_pods.go:61] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.514844   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.514854   73732 system_pods.go:61] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.514859   73732 system_pods.go:61] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.514868   73732 system_pods.go:74] duration metric: took 179.304519ms to wait for pod list to return data ...
	I1105 19:16:03.514877   73732 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:16:03.712690   73732 default_sa.go:45] found service account: "default"
	I1105 19:16:03.712719   73732 default_sa.go:55] duration metric: took 197.831177ms for default service account to be created ...
	I1105 19:16:03.712731   73732 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:16:03.916858   73732 system_pods.go:86] 9 kube-system pods found
	I1105 19:16:03.916893   73732 system_pods.go:89] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.916902   73732 system_pods.go:89] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.916908   73732 system_pods.go:89] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.916913   73732 system_pods.go:89] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.916918   73732 system_pods.go:89] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.916921   73732 system_pods.go:89] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.916924   73732 system_pods.go:89] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.916934   73732 system_pods.go:89] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.916941   73732 system_pods.go:89] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.916953   73732 system_pods.go:126] duration metric: took 204.215711ms to wait for k8s-apps to be running ...
	I1105 19:16:03.916963   73732 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:16:03.917019   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:03.931369   73732 system_svc.go:56] duration metric: took 14.397556ms WaitForService to wait for kubelet
	I1105 19:16:03.931407   73732 kubeadm.go:582] duration metric: took 11.686653516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:16:03.931454   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:16:04.111904   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:16:04.111928   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:16:04.111937   73732 node_conditions.go:105] duration metric: took 180.475073ms to run NodePressure ...
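	The NodePressure step above reads the node's advertised capacity (2 CPUs, 17734596Ki ephemeral storage here). The same fields can be read straight off the node object (sketch, assuming the kubectl context this run creates):

	    kubectl --context embed-certs-271881 get node embed-certs-271881 \
	      -o jsonpath='{.status.capacity}{"\n"}{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'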
	I1105 19:16:04.111947   73732 start.go:241] waiting for startup goroutines ...
	I1105 19:16:04.111953   73732 start.go:246] waiting for cluster config update ...
	I1105 19:16:04.111962   73732 start.go:255] writing updated cluster config ...
	I1105 19:16:04.112197   73732 ssh_runner.go:195] Run: rm -f paused
	I1105 19:16:04.158775   73732 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:16:04.160801   73732 out.go:177] * Done! kubectl is now configured to use "embed-certs-271881" cluster and "default" namespace by default
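	At this point the embed-certs-271881 profile is up and kubectl has been pointed at it. A quick sanity check of the resulting cluster might look like this (illustrative, not taken from the log):

	    kubectl --context embed-certs-271881 get nodes -o wide
	    kubectl --context embed-certs-271881 -n kube-system get pods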
	I1105 19:16:01.346039   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:03.346369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:05.846866   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:08.346383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:10.346570   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:12.347171   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:14.846335   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.840591   73496 pod_ready.go:82] duration metric: took 4m0.000143963s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	E1105 19:16:17.840620   73496 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:16:17.840649   73496 pod_ready.go:39] duration metric: took 4m11.022533189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:17.840682   73496 kubeadm.go:597] duration metric: took 4m18.432062793s to restartPrimaryControlPlane
	W1105 19:16:17.840732   73496 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:16:17.840755   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
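	The 4m0s timeout above appears to be the intended outcome in this suite: the metrics-server addon is configured with the placeholder image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" lines earlier), so the container presumably can never be pulled and the pod never turns Ready. A typical way to surface that from the failing pod would be (diagnostic sketch; an ImagePullBackOff-style status is an assumption, not something captured in this log):

	    kubectl --context no-preload-459223 -n kube-system describe pod metrics-server-6867b74b74-5sp2j
	    kubectl --context no-preload-459223 -n kube-system get pod metrics-server-6867b74b74-5sp2j \
	      -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'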
	I1105 19:16:21.064069   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:16:21.064607   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:21.064798   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:26.065202   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:26.065410   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:36.065932   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:36.066151   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
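	For the kubelet-check failures above (process 74485), the 127.0.0.1:10248 healthz endpoint is refusing connections, which normally means the kubelet service is not yet running on that node. A minimal on-node check would be (sketch; the profile name behind process 74485 is not shown in this excerpt, so <profile> is a placeholder):

	    minikube ssh -p <profile> "sudo systemctl status kubelet --no-pager"
	    minikube ssh -p <profile> "sudo journalctl -u kubelet -n 50 --no-pager"
	    minikube ssh -p <profile> "curl -sS http://localhost:10248/healthz"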
	I1105 19:16:43.960239   73496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.119460606s)
	I1105 19:16:43.960324   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:43.986199   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:16:43.999287   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:16:44.013653   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:16:44.013675   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:16:44.013718   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:16:44.026073   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:16:44.026140   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:16:44.038723   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:16:44.050880   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:16:44.050957   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:16:44.061696   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.071739   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:16:44.072301   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.084030   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:16:44.093217   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:16:44.093275   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:16:44.102494   73496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:16:44.267623   73496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:16:52.534375   73496 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:16:52.534458   73496 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:16:52.534569   73496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:16:52.534704   73496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:16:52.534834   73496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:16:52.534930   73496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:16:52.536666   73496 out.go:235]   - Generating certificates and keys ...
	I1105 19:16:52.536759   73496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:16:52.536836   73496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:16:52.536911   73496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:16:52.536963   73496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:16:52.537060   73496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:16:52.537145   73496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:16:52.537232   73496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:16:52.537286   73496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:16:52.537361   73496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:16:52.537455   73496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:16:52.537500   73496 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:16:52.537578   73496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:16:52.537648   73496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:16:52.537725   73496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:16:52.537797   73496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:16:52.537905   73496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:16:52.537988   73496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:16:52.538075   73496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:16:52.538136   73496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:16:52.539588   73496 out.go:235]   - Booting up control plane ...
	I1105 19:16:52.539669   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:16:52.539743   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:16:52.539800   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:16:52.539885   73496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:16:52.539987   73496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:16:52.540057   73496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:16:52.540206   73496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:16:52.540300   73496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:16:52.540367   73496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733469ms
	I1105 19:16:52.540447   73496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:16:52.540528   73496 kubeadm.go:310] [api-check] The API server is healthy after 5.001962829s
	I1105 19:16:52.540651   73496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:16:52.540806   73496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:16:52.540899   73496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:16:52.541094   73496 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-459223 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:16:52.541164   73496 kubeadm.go:310] [bootstrap-token] Using token: f0bzzt.jihwqjda853aoxrb
	I1105 19:16:52.543528   73496 out.go:235]   - Configuring RBAC rules ...
	I1105 19:16:52.543658   73496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:16:52.543777   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:16:52.543942   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:16:52.544072   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:16:52.544222   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:16:52.544327   73496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:16:52.544453   73496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:16:52.544493   73496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:16:52.544536   73496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:16:52.544542   73496 kubeadm.go:310] 
	I1105 19:16:52.544593   73496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:16:52.544599   73496 kubeadm.go:310] 
	I1105 19:16:52.544687   73496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:16:52.544701   73496 kubeadm.go:310] 
	I1105 19:16:52.544739   73496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:16:52.544795   73496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:16:52.544855   73496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:16:52.544881   73496 kubeadm.go:310] 
	I1105 19:16:52.544958   73496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:16:52.544971   73496 kubeadm.go:310] 
	I1105 19:16:52.545039   73496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:16:52.545049   73496 kubeadm.go:310] 
	I1105 19:16:52.545111   73496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:16:52.545193   73496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:16:52.545251   73496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:16:52.545257   73496 kubeadm.go:310] 
	I1105 19:16:52.545324   73496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:16:52.545403   73496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:16:52.545409   73496 kubeadm.go:310] 
	I1105 19:16:52.545480   73496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.545605   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:16:52.545638   73496 kubeadm.go:310] 	--control-plane 
	I1105 19:16:52.545648   73496 kubeadm.go:310] 
	I1105 19:16:52.545779   73496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:16:52.545794   73496 kubeadm.go:310] 
	I1105 19:16:52.545903   73496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.546059   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:16:52.546074   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:16:52.546083   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:16:52.548357   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:16:52.549732   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:16:52.560406   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
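	The 496-byte conflist copied above is minikube's generated bridge CNI configuration; its contents are not included in this log. For orientation, a bridge conflist of this shape typically looks roughly like the following (illustrative values only, assumed rather than read from the run):

	    minikube ssh -p no-preload-459223 "cat /etc/cni/net.d/1-k8s.conflist"
	    # roughly:
	    {
	      "cniVersion": "1.0.0",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "addIf": "true",
	          "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }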
	I1105 19:16:52.577268   73496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:16:52.577334   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:52.577373   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-459223 minikube.k8s.io/updated_at=2024_11_05T19_16_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=no-preload-459223 minikube.k8s.io/primary=true
	I1105 19:16:52.776299   73496 ops.go:34] apiserver oom_adj: -16
	I1105 19:16:52.776456   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.276618   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.777474   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.276726   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.777004   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.276725   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.777410   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.893941   73496 kubeadm.go:1113] duration metric: took 3.316665512s to wait for elevateKubeSystemPrivileges
	I1105 19:16:55.893984   73496 kubeadm.go:394] duration metric: took 4m56.532038314s to StartCluster
	I1105 19:16:55.894007   73496 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.894104   73496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:16:55.896620   73496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.896934   73496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:16:55.897120   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:16:55.897056   73496 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:16:55.897166   73496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-459223"
	I1105 19:16:55.897176   73496 addons.go:69] Setting default-storageclass=true in profile "no-preload-459223"
	I1105 19:16:55.897186   73496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-459223"
	I1105 19:16:55.897193   73496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-459223"
	I1105 19:16:55.897211   73496 addons.go:69] Setting metrics-server=true in profile "no-preload-459223"
	I1105 19:16:55.897231   73496 addons.go:234] Setting addon metrics-server=true in "no-preload-459223"
	W1105 19:16:55.897243   73496 addons.go:243] addon metrics-server should already be in state true
	I1105 19:16:55.897271   73496 host.go:66] Checking if "no-preload-459223" exists ...
	W1105 19:16:55.897195   73496 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:16:55.897323   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.897599   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897642   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897705   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897754   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897711   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897811   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.898341   73496 out.go:177] * Verifying Kubernetes components...
	I1105 19:16:55.899778   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:16:55.914218   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1105 19:16:55.914305   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1105 19:16:55.914726   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.914837   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.915283   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915305   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915391   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915418   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915642   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915757   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915804   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.916323   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.916367   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.916858   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1105 19:16:55.917296   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.917805   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.917832   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.918156   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.918678   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.918720   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.919527   73496 addons.go:234] Setting addon default-storageclass=true in "no-preload-459223"
	W1105 19:16:55.919549   73496 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:16:55.919576   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.919954   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.919996   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.932547   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I1105 19:16:55.933026   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.933588   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.933601   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.933918   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.934153   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.936094   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.937415   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I1105 19:16:55.937800   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.937812   73496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:16:55.938312   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.938324   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.938420   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I1105 19:16:55.938661   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.938816   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.938867   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:16:55.938894   73496 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:16:55.938918   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.939014   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.939350   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.939362   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.939855   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.940281   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.940310   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.940959   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.942661   73496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:16:55.942797   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943216   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.943392   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943422   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.943588   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.943842   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.944078   73496 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:55.944083   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.944096   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:16:55.944114   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.947574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.947767   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.947789   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.948125   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.948249   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.948343   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.948424   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.987691   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I1105 19:16:55.988131   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.988714   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.988739   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.989127   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.989325   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.991207   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.991453   73496 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:55.991472   73496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:16:55.991492   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.994362   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994800   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.994846   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994938   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.995145   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.995315   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.996088   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:56.109142   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:16:56.126382   73496 node_ready.go:35] waiting up to 6m0s for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138050   73496 node_ready.go:49] node "no-preload-459223" has status "Ready":"True"
	I1105 19:16:56.138076   73496 node_ready.go:38] duration metric: took 11.661265ms for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138087   73496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:56.143325   73496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:56.230205   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:16:56.230228   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:16:56.232603   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:56.259360   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:16:56.259388   73496 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:16:56.268694   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:56.321334   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:56.321364   73496 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:16:56.387409   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
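For context, the addon manifests in the lines above are first copied to /etc/kubernetes/addons/ on the node and then applied with the node's own kubectl binary against the local kubeconfig. A minimal way to inspect the same state by hand, assuming the no-preload-459223 profile is still running and minikube ssh is available (paths and kubectl version taken from the log above):

    # list the addon manifests staged on the node
    minikube -p no-preload-459223 ssh "sudo ls -l /etc/kubernetes/addons/"
    # re-run the same apply that minikube performs (idempotent)
    minikube -p no-preload-459223 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-server-deployment.yaml"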
	I1105 19:16:57.010417   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010441   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010496   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010522   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010748   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.010795   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010804   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010812   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010818   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010817   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010830   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010838   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010843   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.011143   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011147   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011205   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011221   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.011209   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011298   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074127   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.074148   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.074476   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.074543   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074508   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.135875   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.135898   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136259   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136280   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136278   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136291   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.136308   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136703   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136747   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136757   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136767   73496 addons.go:475] Verifying addon metrics-server=true in "no-preload-459223"
	I1105 19:16:57.138699   73496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:16:56.066834   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:56.067140   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:57.140755   73496 addons.go:510] duration metric: took 1.243699533s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
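Once the metrics-server addon reports enabled, a quick host-side check with standard kubectl commands can confirm it registered; the deployment name matches the manifest applied above, while v1beta1.metrics.k8s.io is the conventional APIService name and is an assumption here (adjust if the manifest differs):

    kubectl --context no-preload-459223 -n kube-system get deployment metrics-server
    kubectl --context no-preload-459223 get apiservice v1beta1.metrics.k8s.io
    # metrics typically become available a minute or so after the pod reports Ready
    kubectl --context no-preload-459223 top nodes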
	I1105 19:16:58.154376   73496 pod_ready.go:103] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:17:00.149838   73496 pod_ready.go:93] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:00.149864   73496 pod_ready.go:82] duration metric: took 4.006514005s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:00.149876   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156460   73496 pod_ready.go:93] pod "kube-apiserver-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.156486   73496 pod_ready.go:82] duration metric: took 1.006602068s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156499   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160598   73496 pod_ready.go:93] pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.160618   73496 pod_ready.go:82] duration metric: took 4.110322ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160631   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164461   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.164482   73496 pod_ready.go:82] duration metric: took 3.842329ms for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164492   73496 pod_ready.go:39] duration metric: took 5.026393011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:17:01.164509   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:17:01.164566   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:17:01.183307   73496 api_server.go:72] duration metric: took 5.286331754s to wait for apiserver process to appear ...
	I1105 19:17:01.183338   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:17:01.183357   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:17:01.189083   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:17:01.190439   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:17:01.190469   73496 api_server.go:131] duration metric: took 7.123058ms to wait for apiserver health ...
	I1105 19:17:01.190479   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:17:01.198820   73496 system_pods.go:59] 9 kube-system pods found
	I1105 19:17:01.198854   73496 system_pods.go:61] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198862   73496 system_pods.go:61] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198869   73496 system_pods.go:61] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.198873   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.198879   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.198883   73496 system_pods.go:61] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.198887   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.198893   73496 system_pods.go:61] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.198896   73496 system_pods.go:61] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.198903   73496 system_pods.go:74] duration metric: took 8.418414ms to wait for pod list to return data ...
	I1105 19:17:01.198913   73496 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:17:01.202229   73496 default_sa.go:45] found service account: "default"
	I1105 19:17:01.202251   73496 default_sa.go:55] duration metric: took 3.332652ms for default service account to be created ...
	I1105 19:17:01.202260   73496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:17:01.208774   73496 system_pods.go:86] 9 kube-system pods found
	I1105 19:17:01.208803   73496 system_pods.go:89] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208811   73496 system_pods.go:89] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208817   73496 system_pods.go:89] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.208821   73496 system_pods.go:89] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.208825   73496 system_pods.go:89] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.208828   73496 system_pods.go:89] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.208833   73496 system_pods.go:89] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.208838   73496 system_pods.go:89] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.208842   73496 system_pods.go:89] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.208848   73496 system_pods.go:126] duration metric: took 6.584071ms to wait for k8s-apps to be running ...
	I1105 19:17:01.208856   73496 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:17:01.208898   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:01.225005   73496 system_svc.go:56] duration metric: took 16.138051ms WaitForService to wait for kubelet
	I1105 19:17:01.225038   73496 kubeadm.go:582] duration metric: took 5.328067688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:17:01.225062   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:17:01.347771   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:17:01.347799   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:17:01.347813   73496 node_conditions.go:105] duration metric: took 122.746343ms to run NodePressure ...
	I1105 19:17:01.347826   73496 start.go:241] waiting for startup goroutines ...
	I1105 19:17:01.347834   73496 start.go:246] waiting for cluster config update ...
	I1105 19:17:01.347846   73496 start.go:255] writing updated cluster config ...
	I1105 19:17:01.348126   73496 ssh_runner.go:195] Run: rm -f paused
	I1105 19:17:01.396396   73496 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:17:01.398528   73496 out.go:177] * Done! kubectl is now configured to use "no-preload-459223" cluster and "default" namespace by default
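With the cluster reported as started and kubectl pointed at it, a short, non-authoritative sanity check from the host (context name assumed equal to the profile name, node IP taken from the log above):

    kubectl config current-context        # expected: no-preload-459223
    kubectl get nodes -o wide             # node Ready on 192.168.72.101
    kubectl -n kube-system get pods       # control-plane pods Running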
	I1105 19:17:36.069129   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:17:36.069396   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:17:36.069426   74485 kubeadm.go:310] 
	I1105 19:17:36.069489   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:17:36.069572   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:17:36.069591   74485 kubeadm.go:310] 
	I1105 19:17:36.069638   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:17:36.069699   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:17:36.069843   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:17:36.069852   74485 kubeadm.go:310] 
	I1105 19:17:36.069967   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:17:36.070017   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:17:36.070067   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:17:36.070074   74485 kubeadm.go:310] 
	I1105 19:17:36.070216   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:17:36.070328   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:17:36.070345   74485 kubeadm.go:310] 
	I1105 19:17:36.070486   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:17:36.070622   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:17:36.070690   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:17:36.070758   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:17:36.070767   74485 kubeadm.go:310] 
	I1105 19:17:36.071471   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:17:36.071558   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:17:36.071652   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
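The kubeadm output above already names the relevant checks; collected into one sequence that can be run on the node (for example via minikube ssh), mirroring the probes kubeadm performs:

    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100
    # the same healthz probe kubeadm retries against the kubelet
    curl -sSL http://localhost:10248/healthz
    # list any control-plane containers CRI-O managed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause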
	W1105 19:17:36.071791   74485 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1105 19:17:36.071838   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:17:36.527864   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:36.543211   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:17:36.552656   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:17:36.552676   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:17:36.552734   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:17:36.562296   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:17:36.562360   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:17:36.571759   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:17:36.580534   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:17:36.580586   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:17:36.590320   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.599165   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:17:36.599235   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.608340   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:17:36.616935   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:17:36.616986   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
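The grep/rm cycle above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443. A rough shell equivalent of that logic, for illustration only:

    for f in admin kubelet controller-manager scheduler; do
      cfg=/etc/kubernetes/${f}.conf
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$cfg"; then
        sudo rm -f "$cfg"   # missing or pointing elsewhere: remove before re-running kubeadm init
      fi
    done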
	I1105 19:17:36.625948   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:17:36.843267   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:19:32.770686   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:19:32.770828   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 19:19:32.772504   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:19:32.772564   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:19:32.772656   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:19:32.772784   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:19:32.772893   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:19:32.772971   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:19:32.774648   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:19:32.774726   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:19:32.774804   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:19:32.774902   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:19:32.775012   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:19:32.775144   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:19:32.775223   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:19:32.775307   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:19:32.775397   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:19:32.775487   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:19:32.775597   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:19:32.775651   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:19:32.775728   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:19:32.775796   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:19:32.775864   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:19:32.775961   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:19:32.776041   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:19:32.776175   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:19:32.776281   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:19:32.776330   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:19:32.776417   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:19:32.777837   74485 out.go:235]   - Booting up control plane ...
	I1105 19:19:32.777940   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:19:32.778032   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:19:32.778134   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:19:32.778248   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:19:32.778489   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:19:32.778563   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:19:32.778652   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.778960   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779080   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779302   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779399   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779663   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779766   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779990   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780051   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.780241   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780260   74485 kubeadm.go:310] 
	I1105 19:19:32.780325   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:19:32.780381   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:19:32.780391   74485 kubeadm.go:310] 
	I1105 19:19:32.780438   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:19:32.780486   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:19:32.780627   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:19:32.780639   74485 kubeadm.go:310] 
	I1105 19:19:32.780748   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:19:32.780790   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:19:32.780819   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:19:32.780825   74485 kubeadm.go:310] 
	I1105 19:19:32.780961   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:19:32.781048   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:19:32.781055   74485 kubeadm.go:310] 
	I1105 19:19:32.781144   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:19:32.781225   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:19:32.781293   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:19:32.781394   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:19:32.781475   74485 kubeadm.go:394] duration metric: took 8m1.792270232s to StartCluster
	I1105 19:19:32.781485   74485 kubeadm.go:310] 
	I1105 19:19:32.781522   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:19:32.781589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:19:32.825435   74485 cri.go:89] found id: ""
	I1105 19:19:32.825465   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.825475   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:19:32.825482   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:19:32.825543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:19:32.859245   74485 cri.go:89] found id: ""
	I1105 19:19:32.859275   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.859286   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:19:32.859293   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:19:32.859355   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:19:32.890801   74485 cri.go:89] found id: ""
	I1105 19:19:32.890833   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.890844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:19:32.890851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:19:32.890919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:19:32.925244   74485 cri.go:89] found id: ""
	I1105 19:19:32.925273   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.925280   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:19:32.925287   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:19:32.925352   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:19:32.959091   74485 cri.go:89] found id: ""
	I1105 19:19:32.959118   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.959129   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:19:32.959137   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:19:32.959191   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:19:32.990230   74485 cri.go:89] found id: ""
	I1105 19:19:32.990264   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.990276   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:19:32.990284   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:19:32.990343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:19:33.027461   74485 cri.go:89] found id: ""
	I1105 19:19:33.027494   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.027505   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:19:33.027512   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:19:33.027574   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:19:33.070819   74485 cri.go:89] found id: ""
	I1105 19:19:33.070847   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.070858   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:19:33.070869   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:19:33.070883   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:19:33.122580   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:19:33.122615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:19:33.136015   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:19:33.136043   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:19:33.213727   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:19:33.213750   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:19:33.213762   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:19:33.324287   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:19:33.324333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
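The log-gathering commands above can be replayed by hand on the node when triaging a failed start; each appears verbatim earlier in this trace:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo crictl ps -a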
	W1105 19:19:33.384732   74485 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 19:19:33.384785   74485 out.go:270] * 
	W1105 19:19:33.384844   74485 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.384857   74485 out.go:270] * 
	W1105 19:19:33.385632   74485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:19:33.388860   74485 out.go:201] 
	W1105 19:19:33.390328   74485 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.390366   74485 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 19:19:33.390393   74485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 19:19:33.391785   74485 out.go:201] 
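The failure captured above is kubeadm's wait-control-plane timeout: the static Pod manifests were written, but the kubelet never answered on 127.0.0.1:10248, so minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal follow-up sketch based on the commands the output itself recommends; the profile name and container ID are placeholders, and shell access to the affected VM is assumed:

	# open a shell on the node for the failing profile
	minikube -p <profile> ssh
	# check whether the kubelet is running and why it stopped
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers CRI-O managed to start, then read their logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <container-id>
	# retry the start with the cgroup-driver hint from the suggestion above
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd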
	
	
	==> CRI-O <==
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.392071471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db1ad521-fc76-404c-809c-46569bb2f300 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.392261700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730833907317693608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f787ef550160cd97dd2407c47c75addf578d4904b03bfd41c5f802269baf23ce,PodSandboxId:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730833887735884278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20,PodSandboxId:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833884147657996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730833876519682773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb,PodSandboxId:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833876509802109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0
-17c9225a3aa0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9,PodSandboxId:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833872477428273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2
df2f0aadf30,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2,PodSandboxId:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833872478857514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8f14005173a948ad352e15e16d6b07a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e,PodSandboxId:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833872465021873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4
a106,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a,PodSandboxId:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833872457852974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2
d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db1ad521-fc76-404c-809c-46569bb2f300 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.431303905Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8258adc-e368-407b-9eec-2d0d5392e216 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.431388161Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8258adc-e368-407b-9eec-2d0d5392e216 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.432286249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79d0b128-ba78-4daa-9d86-c30581d9d0a4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.432669579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834681432647473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79d0b128-ba78-4daa-9d86-c30581d9d0a4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.433187464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e902819-ba4b-4a94-8dc4-b7f1d55bcb0f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.433241364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e902819-ba4b-4a94-8dc4-b7f1d55bcb0f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.433437629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730833907317693608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f787ef550160cd97dd2407c47c75addf578d4904b03bfd41c5f802269baf23ce,PodSandboxId:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730833887735884278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20,PodSandboxId:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833884147657996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730833876519682773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb,PodSandboxId:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833876509802109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0
-17c9225a3aa0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9,PodSandboxId:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833872477428273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2
df2f0aadf30,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2,PodSandboxId:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833872478857514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8f14005173a948ad352e15e16d6b07a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e,PodSandboxId:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833872465021873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4
a106,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a,PodSandboxId:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833872457852974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2
d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e902819-ba4b-4a94-8dc4-b7f1d55bcb0f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.461302875Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=f376d34c-73eb-44da-b6af-84e18ad80478 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.461385510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f376d34c-73eb-44da-b6af-84e18ad80478 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.472594740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b0865c4-148f-4894-858d-56f02547905d name=/runtime.v1.RuntimeService/Version
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.472667821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b0865c4-148f-4894-858d-56f02547905d name=/runtime.v1.RuntimeService/Version
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.473848809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=158b9dba-128c-4ac5-8998-e04718fc401c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.474280146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834681474256807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=158b9dba-128c-4ac5-8998-e04718fc401c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.474804234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6547a173-9934-4c56-9839-30fbd3cb8360 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.474852260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6547a173-9934-4c56-9839-30fbd3cb8360 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.475072925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730833907317693608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f787ef550160cd97dd2407c47c75addf578d4904b03bfd41c5f802269baf23ce,PodSandboxId:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730833887735884278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20,PodSandboxId:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833884147657996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730833876519682773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb,PodSandboxId:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833876509802109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0
-17c9225a3aa0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9,PodSandboxId:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833872477428273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2
df2f0aadf30,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2,PodSandboxId:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833872478857514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8f14005173a948ad352e15e16d6b07a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e,PodSandboxId:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833872465021873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4
a106,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a,PodSandboxId:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833872457852974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2
d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6547a173-9934-4c56-9839-30fbd3cb8360 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.506563672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c253d78-bfbc-4efe-b5cf-98e98651c1c7 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.506638862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c253d78-bfbc-4efe-b5cf-98e98651c1c7 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.507495428Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0ffca69-19a7-4fdc-92df-4159ee0ccef9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.508184143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834681508152507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0ffca69-19a7-4fdc-92df-4159ee0ccef9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.508719968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eeea8469-4802-495c-b8ba-ea7e9db4682f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.508788503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eeea8469-4802-495c-b8ba-ea7e9db4682f name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:24:41 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:24:41.509024820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730833907317693608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f787ef550160cd97dd2407c47c75addf578d4904b03bfd41c5f802269baf23ce,PodSandboxId:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730833887735884278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20,PodSandboxId:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833884147657996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730833876519682773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb,PodSandboxId:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833876509802109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0
-17c9225a3aa0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9,PodSandboxId:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833872477428273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2
df2f0aadf30,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2,PodSandboxId:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833872478857514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8f14005173a948ad352e15e16d6b07a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e,PodSandboxId:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833872465021873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4
a106,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a,PodSandboxId:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833872457852974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2
d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eeea8469-4802-495c-b8ba-ea7e9db4682f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	44080c0e289a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   3adea7b8362a4       storage-provisioner
	f787ef550160c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   da2dde857316d       busybox
	531bb8d98703d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   342009a1adef6       coredns-7c65d6cfc9-cdvml
	6039942d4d993       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   3adea7b8362a4       storage-provisioner
	e8180f551c559       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   73f7d5a507be5       kube-proxy-8v42c
	4a77037302cd0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   71131a920b634       kube-controller-manager-default-k8s-diff-port-608095
	a8de930573a64       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   36b1d0b1e9325       kube-apiserver-default-k8s-diff-port-608095
	e6393e5b4069d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   a2394d68180b1       etcd-default-k8s-diff-port-608095
	6bf66f706c934       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   d3bb63c9509f7       kube-scheduler-default-k8s-diff-port-608095
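
The table above mirrors what crictl reports on the node, so any entry can be inspected directly; for instance, the exited storage-provisioner attempt. A hedged sketch, assuming shell access to the default-k8s-diff-port-608095 VM and that the truncated ID shown in the table resolves as a unique prefix:

	# e.g. via: minikube -p default-k8s-diff-port-608095 ssh
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs 6039942d4d993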
	
	
	==> coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38179 - 22427 "HINFO IN 2591781970772243088.4480814410341590386. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009728045s
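
The same CoreDNS output can also be pulled through the API server rather than the container runtime; a short sketch, assuming the kubeconfig context carries the profile name used elsewhere in this report:

	kubectl --context default-k8s-diff-port-608095 -n kube-system logs coredns-7c65d6cfc9-cdvml
	# or select by the standard CoreDNS label if the pod name changes after a restart
	kubectl --context default-k8s-diff-port-608095 -n kube-system logs -l k8s-app=kube-dns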
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-608095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-608095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=default-k8s-diff-port-608095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T19_03_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 19:03:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-608095
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 19:24:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 19:21:58 +0000   Tue, 05 Nov 2024 19:03:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 19:21:58 +0000   Tue, 05 Nov 2024 19:03:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 19:21:58 +0000   Tue, 05 Nov 2024 19:03:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 19:21:58 +0000   Tue, 05 Nov 2024 19:11:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.10
	  Hostname:    default-k8s-diff-port-608095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e79da7c0acd44febfe2af835f76cda4
	  System UUID:                1e79da7c-0acd-44fe-bfe2-af835f76cda4
	  Boot ID:                    b61422b5-93e7-47ec-a4bc-d57993931982
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-cdvml                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-608095                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-608095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-608095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-8v42c                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-608095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-44mcg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-608095 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-608095 event: Registered Node default-k8s-diff-port-608095 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-608095 event: Registered Node default-k8s-diff-port-608095 in Controller
	
	
	==> dmesg <==
	[Nov 5 19:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057037] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046641] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920435] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.899616] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.351819] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov 5 19:11] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.056219] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066421] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.189159] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.134552] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.297918] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[  +4.042158] systemd-fstab-generator[789]: Ignoring "noauto" option for root device
	[  +1.983351] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +0.059896] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.589380] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.819767] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +3.870543] kauditd_printk_skb: 64 callbacks suppressed
	[ +24.202161] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] <==
	{"level":"info","ts":"2024-11-05T19:11:14.803788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48aebc8897d84757 received MsgVoteResp from 48aebc8897d84757 at term 3"}
	{"level":"info","ts":"2024-11-05T19:11:14.803798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48aebc8897d84757 became leader at term 3"}
	{"level":"info","ts":"2024-11-05T19:11:14.803805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 48aebc8897d84757 elected leader 48aebc8897d84757 at term 3"}
	{"level":"info","ts":"2024-11-05T19:11:14.814674Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:11:14.814755Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"48aebc8897d84757","local-member-attributes":"{Name:default-k8s-diff-port-608095 ClientURLs:[https://192.168.50.10:2379]}","request-path":"/0/members/48aebc8897d84757/attributes","cluster-id":"e20af935bb2270cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T19:11:14.814896Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:11:14.816197Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:11:14.816631Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:11:14.817434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.10:2379"}
	{"level":"info","ts":"2024-11-05T19:11:14.818013Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T19:11:14.818164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T19:11:14.818202Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-11-05T19:11:31.446327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.956271ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5140739118161463722 > lease_revoke:<id:475792fdb5de1a93>","response":"size:28"}
	{"level":"warn","ts":"2024-11-05T19:11:31.598385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.941287ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5140739118161463723 > lease_revoke:<id:475792fdb5de1a57>","response":"size:28"}
	{"level":"info","ts":"2024-11-05T19:11:31.598519Z","caller":"traceutil/trace.go:171","msg":"trace[682513589] linearizableReadLoop","detail":"{readStateIndex:692; appliedIndex:690; }","duration":"539.487941ms","start":"2024-11-05T19:11:31.059016Z","end":"2024-11-05T19:11:31.598504Z","steps":["trace[682513589] 'read index received'  (duration: 153.756839ms)","trace[682513589] 'applied index is now lower than readState.Index'  (duration: 385.730299ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T19:11:31.598850Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"539.819562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-11-05T19:11:31.599240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.420273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-44mcg\" ","response":"range_response_count:1 size:4394"}
	{"level":"info","ts":"2024-11-05T19:11:31.599296Z","caller":"traceutil/trace.go:171","msg":"trace[1330847336] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-44mcg; range_end:; response_count:1; response_revision:650; }","duration":"387.475771ms","start":"2024-11-05T19:11:31.211805Z","end":"2024-11-05T19:11:31.599280Z","steps":["trace[1330847336] 'agreement among raft nodes before linearized reading'  (duration: 387.338302ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T19:11:31.599328Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T19:11:31.211749Z","time spent":"387.571043ms","remote":"127.0.0.1:38242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4417,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-44mcg\" "}
	{"level":"info","ts":"2024-11-05T19:11:31.599253Z","caller":"traceutil/trace.go:171","msg":"trace[317850113] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:650; }","duration":"540.243525ms","start":"2024-11-05T19:11:31.058997Z","end":"2024-11-05T19:11:31.599241Z","steps":["trace[317850113] 'agreement among raft nodes before linearized reading'  (duration: 539.767144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T19:11:31.599499Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T19:11:31.058955Z","time spent":"540.517834ms","remote":"127.0.0.1:38024","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-11-05T19:11:52.623219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.131604ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5140739118161463910 > lease_revoke:<id:475792fdbcf809db>","response":"size:28"}
	{"level":"info","ts":"2024-11-05T19:21:14.846527Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":893}
	{"level":"info","ts":"2024-11-05T19:21:14.862565Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":893,"took":"15.4884ms","hash":2176153668,"current-db-size-bytes":2727936,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2727936,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-11-05T19:21:14.862677Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2176153668,"revision":893,"compact-revision":-1}
	
	
	==> kernel <==
	 19:24:41 up 13 min,  0 users,  load average: 0.17, 0.13, 0.09
	Linux default-k8s-diff-port-608095 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1105 19:21:17.106241       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:21:17.106483       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1105 19:21:17.107629       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:21:17.107637       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:22:17.108775       1 handler_proxy.go:99] no RequestInfo found in the context
	W1105 19:22:17.109057       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:22:17.109057       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1105 19:22:17.109122       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:22:17.110271       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:22:17.110341       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:24:17.111345       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:24:17.111640       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1105 19:24:17.111726       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:24:17.111817       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:24:17.113094       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:24:17.113249       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] <==
	E1105 19:19:19.726850       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:19:20.188853       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:19:49.732303       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:19:50.196994       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:20:19.738558       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:20:20.203835       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:20:49.746171       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:20:50.211121       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:21:19.752789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:21:20.218380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:21:49.759111       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:21:50.226627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:21:58.788249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-608095"
	E1105 19:22:19.765670       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:22:20.234502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:22:21.130399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="262.275µs"
	I1105 19:22:32.126350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="51.646µs"
	E1105 19:22:49.771126       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:22:50.242787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:23:19.777820       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:23:20.250289       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:23:49.784034       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:23:50.257613       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:24:19.790418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:24:20.265545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 19:11:16.695877       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 19:11:16.705281       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.10"]
	E1105 19:11:16.705458       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 19:11:16.732445       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 19:11:16.732504       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 19:11:16.732540       1 server_linux.go:169] "Using iptables Proxier"
	I1105 19:11:16.734792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 19:11:16.735713       1 server.go:483] "Version info" version="v1.31.2"
	I1105 19:11:16.735745       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:11:16.740088       1 config.go:199] "Starting service config controller"
	I1105 19:11:16.740163       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 19:11:16.740263       1 config.go:105] "Starting endpoint slice config controller"
	I1105 19:11:16.740310       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 19:11:16.740897       1 config.go:328] "Starting node config controller"
	I1105 19:11:16.742791       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 19:11:16.746026       1 shared_informer.go:320] Caches are synced for node config
	I1105 19:11:16.840813       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 19:11:16.840856       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] <==
	I1105 19:11:13.121832       1 serving.go:386] Generated self-signed cert in-memory
	W1105 19:11:16.054474       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1105 19:11:16.054685       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1105 19:11:16.054768       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1105 19:11:16.054797       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1105 19:11:16.115642       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1105 19:11:16.122998       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:11:16.125232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1105 19:11:16.125306       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 19:11:16.125382       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1105 19:11:16.125475       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1105 19:11:16.227225       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 19:23:31 default-k8s-diff-port-608095 kubelet[918]: E1105 19:23:31.308216     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834611307737933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:23:41 default-k8s-diff-port-608095 kubelet[918]: E1105 19:23:41.309869     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834621309536078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:23:41 default-k8s-diff-port-608095 kubelet[918]: E1105 19:23:41.310432     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834621309536078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:23:42 default-k8s-diff-port-608095 kubelet[918]: E1105 19:23:42.113067     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:23:51 default-k8s-diff-port-608095 kubelet[918]: E1105 19:23:51.313172     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834631312527967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:23:51 default-k8s-diff-port-608095 kubelet[918]: E1105 19:23:51.313229     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834631312527967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:23:57 default-k8s-diff-port-608095 kubelet[918]: E1105 19:23:57.114318     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:24:01 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:01.314590     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834641314302671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:01 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:01.314619     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834641314302671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:11 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:11.113727     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:24:11 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:11.142795     918 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 19:24:11 default-k8s-diff-port-608095 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 19:24:11 default-k8s-diff-port-608095 kubelet[918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 19:24:11 default-k8s-diff-port-608095 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 19:24:11 default-k8s-diff-port-608095 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 19:24:11 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:11.316149     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834651315819216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:11 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:11.316187     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834651315819216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:21 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:21.318987     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834661318497279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:21 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:21.319293     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834661318497279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:24 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:24.112862     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:24:31 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:31.321599     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834671321243060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:31 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:31.321879     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834671321243060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:38 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:38.112526     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:24:41 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:41.323953     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834681323661998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:41 default-k8s-diff-port-608095 kubelet[918]: E1105 19:24:41.323992     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834681323661998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] <==
	I1105 19:11:47.408115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 19:11:47.418124       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 19:11:47.418210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 19:12:04.817333       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 19:12:04.818325       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-608095_752a1b62-485c-40c7-9644-380ce41ccb9d!
	I1105 19:12:04.818739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87ec9435-bf7d-4318-aa0b-da7b3dfced1b", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-608095_752a1b62-485c-40c7-9644-380ce41ccb9d became leader
	I1105 19:12:04.919567       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-608095_752a1b62-485c-40c7-9644-380ce41ccb9d!
	
	
	==> storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] <==
	I1105 19:11:16.621589       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1105 19:11:46.625190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-608095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-44mcg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-608095 describe pod metrics-server-6867b74b74-44mcg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-608095 describe pod metrics-server-6867b74b74-44mcg: exit status 1 (62.983618ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-44mcg" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-608095 describe pod metrics-server-6867b74b74-44mcg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.06s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.25s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1105 19:16:37.462499   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271881 -n embed-certs-271881
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-11-05 19:25:04.687204257 +0000 UTC m=+6238.359928950
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-271881 logs -n 25
E1105 19:25:05.695052   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-271881 logs -n 25: (2.073567848s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-929548 sudo cat                              | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo find                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo crio                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-929548                                       | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-537175 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | disable-driver-mounts-537175                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:04 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-459223             | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-271881            | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:07:52.649090   74485 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:07:52.649200   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649205   74485 out.go:358] Setting ErrFile to fd 2...
	I1105 19:07:52.649210   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649374   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:07:52.649909   74485 out.go:352] Setting JSON to false
	I1105 19:07:52.650785   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6615,"bootTime":1730827058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:07:52.650878   74485 start.go:139] virtualization: kvm guest
	I1105 19:07:52.652866   74485 out.go:177] * [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:07:52.654107   74485 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:07:52.654107   74485 notify.go:220] Checking for updates...
	I1105 19:07:52.655282   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:07:52.656379   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:07:52.657451   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:07:52.658694   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:07:52.659835   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:07:52.661251   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:07:52.661622   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.661660   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.677005   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I1105 19:07:52.677521   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.678096   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.678118   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.678489   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.678735   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.680466   74485 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1105 19:07:52.681734   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:07:52.682087   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.682139   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.697071   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1105 19:07:52.697503   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.697958   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.697980   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.698259   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.698439   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.732962   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:07:52.734079   74485 start.go:297] selected driver: kvm2
	I1105 19:07:52.734094   74485 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.734209   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:07:52.734912   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.735038   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:07:52.750214   74485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:07:52.750609   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:07:52.750641   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:07:52.750697   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:07:52.750745   74485 start.go:340] cluster config:
	{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.750877   74485 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.753288   74485 out.go:177] * Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	I1105 19:07:50.739209   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:53.811246   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:52.754354   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:07:52.754407   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 19:07:52.754425   74485 cache.go:56] Caching tarball of preloaded images
	I1105 19:07:52.754503   74485 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:07:52.754515   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 19:07:52.754610   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:07:52.754817   74485 start.go:360] acquireMachinesLock for old-k8s-version-567666: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:07:59.891257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:02.963247   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:09.043263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:12.115289   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:18.195275   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:21.267213   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:27.347251   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:30.419240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:36.499291   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:39.571255   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:45.651258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:48.723262   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:54.803265   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:57.875236   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:03.955241   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:07.027229   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:13.107258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:16.179257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:22.259227   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:25.331263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:31.411234   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:34.483240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:40.563258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:43.635253   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:49.715287   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:52.787276   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:58.867242   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:01.939296   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:08.019268   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:11.091350   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:17.171266   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:20.243245   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:23.247511   73732 start.go:364] duration metric: took 4m30.277290481s to acquireMachinesLock for "embed-certs-271881"
	I1105 19:10:23.247565   73732 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:23.247590   73732 fix.go:54] fixHost starting: 
	I1105 19:10:23.248173   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:23.248235   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:23.263573   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I1105 19:10:23.264016   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:23.264437   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:10:23.264461   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:23.264888   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:23.265122   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:23.265311   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:10:23.267000   73732 fix.go:112] recreateIfNeeded on embed-certs-271881: state=Stopped err=<nil>
	I1105 19:10:23.267031   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	W1105 19:10:23.267183   73732 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:23.269188   73732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-271881" ...
	I1105 19:10:23.244961   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:23.245021   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245327   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:10:23.245352   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245536   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:10:23.247352   73496 machine.go:96] duration metric: took 4m37.425023044s to provisionDockerMachine
	I1105 19:10:23.247393   73496 fix.go:56] duration metric: took 4m37.446801616s for fixHost
	I1105 19:10:23.247400   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 4m37.446835698s
	W1105 19:10:23.247424   73496 start.go:714] error starting host: provision: host is not running
	W1105 19:10:23.247522   73496 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1105 19:10:23.247534   73496 start.go:729] Will try again in 5 seconds ...
	I1105 19:10:23.270443   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Start
	I1105 19:10:23.270681   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring networks are active...
	I1105 19:10:23.271552   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network default is active
	I1105 19:10:23.271924   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network mk-embed-certs-271881 is active
	I1105 19:10:23.272243   73732 main.go:141] libmachine: (embed-certs-271881) Getting domain xml...
	I1105 19:10:23.273027   73732 main.go:141] libmachine: (embed-certs-271881) Creating domain...
	I1105 19:10:24.503219   73732 main.go:141] libmachine: (embed-certs-271881) Waiting to get IP...
	I1105 19:10:24.504067   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.504444   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.504503   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.504415   75020 retry.go:31] will retry after 194.539819ms: waiting for machine to come up
	I1105 19:10:24.701086   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.701552   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.701579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.701501   75020 retry.go:31] will retry after 361.371677ms: waiting for machine to come up
	I1105 19:10:25.064078   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.064484   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.064512   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.064433   75020 retry.go:31] will retry after 442.206433ms: waiting for machine to come up
	I1105 19:10:25.507981   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.508380   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.508405   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.508338   75020 retry.go:31] will retry after 573.453662ms: waiting for machine to come up
	I1105 19:10:26.083299   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.083727   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.083753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.083670   75020 retry.go:31] will retry after 686.210957ms: waiting for machine to come up
	I1105 19:10:26.771637   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.772070   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.772112   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.772062   75020 retry.go:31] will retry after 685.825223ms: waiting for machine to come up
	I1105 19:10:27.459230   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:27.459652   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:27.459677   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:27.459616   75020 retry.go:31] will retry after 1.167971852s: waiting for machine to come up
	I1105 19:10:28.247729   73496 start.go:360] acquireMachinesLock for no-preload-459223: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:10:28.629194   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:28.629526   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:28.629549   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:28.629488   75020 retry.go:31] will retry after 1.180980288s: waiting for machine to come up
	I1105 19:10:29.812048   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:29.812445   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:29.812475   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:29.812390   75020 retry.go:31] will retry after 1.527253183s: waiting for machine to come up
	I1105 19:10:31.342147   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:31.342519   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:31.342546   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:31.342467   75020 retry.go:31] will retry after 1.597485878s: waiting for machine to come up
	I1105 19:10:32.942141   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:32.942459   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:32.942505   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:32.942431   75020 retry.go:31] will retry after 2.416793509s: waiting for machine to come up
	I1105 19:10:35.360354   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:35.360717   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:35.360743   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:35.360674   75020 retry.go:31] will retry after 3.193637492s: waiting for machine to come up
	I1105 19:10:38.556294   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:38.556744   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:38.556775   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:38.556673   75020 retry.go:31] will retry after 3.819760443s: waiting for machine to come up
	I1105 19:10:42.380607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381140   73732 main.go:141] libmachine: (embed-certs-271881) Found IP for machine: 192.168.39.58
	I1105 19:10:42.381172   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has current primary IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381196   73732 main.go:141] libmachine: (embed-certs-271881) Reserving static IP address...
	I1105 19:10:42.381607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.381634   73732 main.go:141] libmachine: (embed-certs-271881) Reserved static IP address: 192.168.39.58
	I1105 19:10:42.381647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | skip adding static IP to network mk-embed-certs-271881 - found existing host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"}
	I1105 19:10:42.381671   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Getting to WaitForSSH function...
	I1105 19:10:42.381686   73732 main.go:141] libmachine: (embed-certs-271881) Waiting for SSH to be available...
	I1105 19:10:42.383908   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384306   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.384333   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384427   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH client type: external
	I1105 19:10:42.384458   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa (-rw-------)
	I1105 19:10:42.384486   73732 main.go:141] libmachine: (embed-certs-271881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:10:42.384502   73732 main.go:141] libmachine: (embed-certs-271881) DBG | About to run SSH command:
	I1105 19:10:42.384510   73732 main.go:141] libmachine: (embed-certs-271881) DBG | exit 0
	I1105 19:10:42.506807   73732 main.go:141] libmachine: (embed-certs-271881) DBG | SSH cmd err, output: <nil>: 
	I1105 19:10:42.507217   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetConfigRaw
	I1105 19:10:42.507868   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.510314   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.510680   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510936   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/config.json ...
	I1105 19:10:42.511183   73732 machine.go:93] provisionDockerMachine start ...
	I1105 19:10:42.511203   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:42.511426   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.513759   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514111   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.514144   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514290   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.514473   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514654   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514827   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.514979   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.515191   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.515202   73732 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:10:42.619241   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:10:42.619273   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619517   73732 buildroot.go:166] provisioning hostname "embed-certs-271881"
	I1105 19:10:42.619555   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619735   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.622695   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623117   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.623146   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623304   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.623465   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623632   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623825   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.623957   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.624122   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.624135   73732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-271881 && echo "embed-certs-271881" | sudo tee /etc/hostname
	I1105 19:10:42.740722   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-271881
	
	I1105 19:10:42.740749   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.743579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.743922   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.743945   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.744160   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.744343   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744470   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.744756   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.744950   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.744972   73732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-271881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-271881/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-271881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:10:42.854869   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:42.854898   73732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:10:42.854926   73732 buildroot.go:174] setting up certificates
	I1105 19:10:42.854940   73732 provision.go:84] configureAuth start
	I1105 19:10:42.854948   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.855222   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.857913   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858228   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.858252   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858440   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.860753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861041   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.861062   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861222   73732 provision.go:143] copyHostCerts
	I1105 19:10:42.861274   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:10:42.861291   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:10:42.861385   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:10:42.861543   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:10:42.861556   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:10:42.861595   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:10:42.861671   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:10:42.861681   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:10:42.861713   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:10:42.861781   73732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.embed-certs-271881 san=[127.0.0.1 192.168.39.58 embed-certs-271881 localhost minikube]
	I1105 19:10:43.659372   74141 start.go:364] duration metric: took 3m39.006624915s to acquireMachinesLock for "default-k8s-diff-port-608095"
	I1105 19:10:43.659450   74141 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:43.659458   74141 fix.go:54] fixHost starting: 
	I1105 19:10:43.659814   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:43.659874   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:43.677604   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I1105 19:10:43.678132   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:43.678624   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:10:43.678649   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:43.679047   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:43.679250   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:10:43.679407   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:10:43.681036   74141 fix.go:112] recreateIfNeeded on default-k8s-diff-port-608095: state=Stopped err=<nil>
	I1105 19:10:43.681063   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	W1105 19:10:43.681194   74141 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:43.683110   74141 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-608095" ...
	I1105 19:10:43.684451   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Start
	I1105 19:10:43.684639   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring networks are active...
	I1105 19:10:43.685436   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network default is active
	I1105 19:10:43.685983   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network mk-default-k8s-diff-port-608095 is active
	I1105 19:10:43.686398   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Getting domain xml...
	I1105 19:10:43.687143   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Creating domain...
	I1105 19:10:43.044648   73732 provision.go:177] copyRemoteCerts
	I1105 19:10:43.044703   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:10:43.044730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.047120   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047506   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.047538   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047717   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.047886   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.048037   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.048186   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.129098   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:10:43.154632   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1105 19:10:43.179681   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 19:10:43.205598   73732 provision.go:87] duration metric: took 350.648117ms to configureAuth
	I1105 19:10:43.205622   73732 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:10:43.205822   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:10:43.205900   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.208446   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.208763   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.208799   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.209006   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.209190   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209489   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.209611   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.209828   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.209850   73732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:10:43.431540   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:10:43.431569   73732 machine.go:96] duration metric: took 920.370689ms to provisionDockerMachine
	I1105 19:10:43.431582   73732 start.go:293] postStartSetup for "embed-certs-271881" (driver="kvm2")
	I1105 19:10:43.431595   73732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:10:43.431617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.431912   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:10:43.431940   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.434821   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435170   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.435193   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435338   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.435532   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.435714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.435851   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.517391   73732 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:10:43.521532   73732 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:10:43.521553   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:10:43.521632   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:10:43.521721   73732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:10:43.521839   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:10:43.531045   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:43.556596   73732 start.go:296] duration metric: took 125.000692ms for postStartSetup
	I1105 19:10:43.556634   73732 fix.go:56] duration metric: took 20.309059136s for fixHost
	I1105 19:10:43.556663   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.558888   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559181   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.559220   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.559531   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559674   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.559934   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.560096   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.560106   73732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:10:43.659219   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833843.637801657
	
	I1105 19:10:43.659240   73732 fix.go:216] guest clock: 1730833843.637801657
	I1105 19:10:43.659247   73732 fix.go:229] Guest: 2024-11-05 19:10:43.637801657 +0000 UTC Remote: 2024-11-05 19:10:43.556637855 +0000 UTC m=+290.729857868 (delta=81.163802ms)
	I1105 19:10:43.659284   73732 fix.go:200] guest clock delta is within tolerance: 81.163802ms
	I1105 19:10:43.659290   73732 start.go:83] releasing machines lock for "embed-certs-271881", held for 20.411743975s
	I1105 19:10:43.659324   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.659589   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:43.662581   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663025   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.663058   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663214   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663907   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.664017   73732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:10:43.664057   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.664108   73732 ssh_runner.go:195] Run: cat /version.json
	I1105 19:10:43.664131   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.666998   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667059   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667365   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667395   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667424   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667438   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667543   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667638   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667897   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667968   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667996   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.668078   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.775067   73732 ssh_runner.go:195] Run: systemctl --version
	I1105 19:10:43.780892   73732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:10:43.919564   73732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:10:43.926362   73732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:10:43.926422   73732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:10:43.942359   73732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:10:43.942378   73732 start.go:495] detecting cgroup driver to use...
	I1105 19:10:43.942450   73732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:10:43.964650   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:10:43.980651   73732 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:10:43.980717   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:10:43.993988   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:10:44.007440   73732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:10:44.132040   73732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:10:44.314220   73732 docker.go:233] disabling docker service ...
	I1105 19:10:44.314294   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:10:44.337362   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:10:44.351277   73732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:10:44.485105   73732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:10:44.621596   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:10:44.636254   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:10:44.656530   73732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:10:44.656595   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.667156   73732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:10:44.667237   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.682233   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.692814   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.704688   73732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:10:44.721662   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.738629   73732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.754944   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.765089   73732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:10:44.774147   73732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:10:44.774210   73732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:10:44.786312   73732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:10:44.795892   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:44.926823   73732 ssh_runner.go:195] Run: sudo systemctl restart crio
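The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with a handful of sed expressions: swap in the registry.k8s.io/pause:3.10 pause image, force the cgroupfs cgroup manager, pin conmon's cgroup, and open unprivileged port 0, then reload systemd and restart CRI-O so the changes take effect. A rough Go equivalent of the core substitutions, expressed as regexp edits over the file contents (a sketch of the pattern only, not minikube's implementation):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "cgroup_manager = \"systemd\"\n" +
            "conmon_cgroup = \"system.slice\"\n"

        // Mirror the logged sed edits: pause image, cgroup manager, conmon cgroup.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
            ReplaceAllString(conf, `conmon_cgroup = "pod"`)
        fmt.Print(conf)
    }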
	I1105 19:10:45.022945   73732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:10:45.023042   73732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:10:45.027389   73732 start.go:563] Will wait 60s for crictl version
	I1105 19:10:45.027451   73732 ssh_runner.go:195] Run: which crictl
	I1105 19:10:45.030701   73732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:10:45.067294   73732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:10:45.067410   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.094394   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.123459   73732 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:10:45.124645   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:45.127396   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.127794   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:45.127833   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.128104   73732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 19:10:45.131923   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
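The one-liner above is minikube's standard idiom for pinning a hosts entry: grep -v drops any stale host.minikube.internal line, echo appends the desired mapping, and the temp file is copied back over /etc/hosts. The same idea written out in Go (a local sketch; minikube performs this over SSH on the guest):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line ending in "\t<name>" and appends
    // "<ip>\t<name>", mirroring the grep/echo/cp one-liner from the log.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // A scratch file keeps the sketch harmless; the real target is /etc/hosts.
        _ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
        fmt.Println(ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"))
    }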
	I1105 19:10:45.143951   73732 kubeadm.go:883] updating cluster {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:10:45.144078   73732 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:10:45.144125   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:45.177770   73732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:10:45.177830   73732 ssh_runner.go:195] Run: which lz4
	I1105 19:10:45.181571   73732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:10:45.186569   73732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:10:45.186602   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:10:46.442865   73732 crio.go:462] duration metric: took 1.26132812s to copy over tarball
	I1105 19:10:46.442959   73732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:10:44.962206   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting to get IP...
	I1105 19:10:44.963032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963397   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963492   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:44.963380   75165 retry.go:31] will retry after 274.297859ms: waiting for machine to come up
	I1105 19:10:45.239024   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239453   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.239406   75165 retry.go:31] will retry after 239.892312ms: waiting for machine to come up
	I1105 19:10:45.481036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481584   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.481569   75165 retry.go:31] will retry after 360.538082ms: waiting for machine to come up
	I1105 19:10:45.844144   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844565   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844596   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.844533   75165 retry.go:31] will retry after 387.597088ms: waiting for machine to come up
	I1105 19:10:46.234241   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234798   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.234738   75165 retry.go:31] will retry after 597.596298ms: waiting for machine to come up
	I1105 19:10:46.833721   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834170   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.834142   75165 retry.go:31] will retry after 688.240413ms: waiting for machine to come up
	I1105 19:10:47.523898   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524412   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524442   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:47.524377   75165 retry.go:31] will retry after 826.38207ms: waiting for machine to come up
	I1105 19:10:48.352258   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352787   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352809   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:48.352681   75165 retry.go:31] will retry after 1.381579847s: waiting for machine to come up
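The interleaved default-k8s-diff-port-608095 lines come from a second profile whose libmachine driver is still waiting for its DHCP lease; every miss schedules another attempt with a somewhat longer delay. A stripped-down version of that wait loop (getIP is a hypothetical probe standing in for the libmachine lookup):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls getIP until it succeeds or the deadline passes, growing the
    // delay between attempts in the spirit of the retry.go lines above.
    func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := getIP(); err == nil {
                return ip, nil
            }
            time.Sleep(delay)
            delay = delay * 3 / 2 // back off a little after each miss
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        // Dummy probe that succeeds immediately, just to exercise the loop.
        ip, err := waitForIP(func() (string, error) { return "192.0.2.10", nil }, time.Minute)
        fmt.Println(ip, err)
    }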
	I1105 19:10:48.547186   73732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104175993s)
	I1105 19:10:48.547221   73732 crio.go:469] duration metric: took 2.104326973s to extract the tarball
	I1105 19:10:48.547231   73732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:10:48.583027   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:48.630180   73732 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:10:48.630208   73732 cache_images.go:84] Images are preloaded, skipping loading
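The preload handling above is a three-step decision: stat /preloaded.tar.lz4 on the guest, scp the cached tarball over when it is missing, extract it into /var, and finally confirm via `crictl images` that the expected images are now present. Sketched as a small Go helper over hypothetical callbacks standing in for minikube's ssh_runner:

    package main

    import "fmt"

    // ensurePreload mirrors the logged flow: check for the tarball on the guest,
    // copy it over when absent, then extract it preserving xattrs.
    func ensurePreload(
        run func(cmd string) error, // hypothetical: run a command on the guest
        copyToGuest func(src, dst string) error, // hypothetical: scp a local file over
        localTar string,
    ) error {
        if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
            if err := copyToGuest(localTar, "/preloaded.tar.lz4"); err != nil {
                return err
            }
        }
        return run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
    }

    func main() {
        // Stub callbacks so the sketch compiles and runs on its own.
        run := func(cmd string) error { fmt.Println("run:", cmd); return nil }
        cp := func(src, dst string) error { fmt.Println("copy:", src, "->", dst); return nil }
        fmt.Println(ensurePreload(run, cp, "preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"))
    }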
	I1105 19:10:48.630218   73732 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.31.2 crio true true} ...
	I1105 19:10:48.630349   73732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-271881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:10:48.630412   73732 ssh_runner.go:195] Run: crio config
	I1105 19:10:48.682182   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:48.682204   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:48.682213   73732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:10:48.682232   73732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-271881 NodeName:embed-certs-271881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:10:48.682354   73732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-271881"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:10:48.682412   73732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:10:48.691968   73732 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:10:48.692031   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:10:48.700980   73732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:10:48.716797   73732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:10:48.732408   73732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1105 19:10:48.748354   73732 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1105 19:10:48.751791   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:48.763068   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:48.893747   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:10:48.910247   73732 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881 for IP: 192.168.39.58
	I1105 19:10:48.910270   73732 certs.go:194] generating shared ca certs ...
	I1105 19:10:48.910303   73732 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:10:48.910488   73732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:10:48.910547   73732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:10:48.910561   73732 certs.go:256] generating profile certs ...
	I1105 19:10:48.910673   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/client.key
	I1105 19:10:48.910768   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key.0a454894
	I1105 19:10:48.910837   73732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key
	I1105 19:10:48.911021   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:10:48.911059   73732 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:10:48.911071   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:10:48.911116   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:10:48.911160   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:10:48.911196   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:10:48.911265   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:48.912104   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:10:48.969066   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:10:49.000713   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:10:49.040367   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:10:49.068456   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1105 19:10:49.094166   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:10:49.115986   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:10:49.137770   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:10:49.161140   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:10:49.182996   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:10:49.206578   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:10:49.230006   73732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:10:49.245835   73732 ssh_runner.go:195] Run: openssl version
	I1105 19:10:49.251252   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:10:49.261237   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265318   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265398   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.270753   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:10:49.280568   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:10:49.290580   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294567   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294644   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.299812   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:10:49.309398   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:10:49.319451   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323490   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323543   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.328708   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:10:49.338805   73732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:10:49.342918   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:10:49.348526   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:10:49.353943   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:10:49.359527   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:10:49.364886   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:10:49.370119   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
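Each of the openssl `-checkend 86400` calls above asks one question: will the certificate still be valid 24 hours from now? The equivalent check with Go's crypto/x509 (the path is illustrative; the log runs this against several certs under /var/lib/minikube/certs):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same meaning as `openssl x509 -checkend 86400`: valid for at least another day?
        fmt.Println("valid for 24h:", time.Now().Add(24*time.Hour).Before(cert.NotAfter))
    }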
	I1105 19:10:49.375437   73732 kubeadm.go:392] StartCluster: {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:10:49.375531   73732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:10:49.375572   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.415844   73732 cri.go:89] found id: ""
	I1105 19:10:49.415916   73732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:10:49.425336   73732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:10:49.425402   73732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:10:49.425474   73732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:10:49.434717   73732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:10:49.435831   73732 kubeconfig.go:125] found "embed-certs-271881" server: "https://192.168.39.58:8443"
	I1105 19:10:49.437903   73732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:10:49.446625   73732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I1105 19:10:49.446657   73732 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:10:49.446668   73732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:10:49.446732   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.479546   73732 cri.go:89] found id: ""
	I1105 19:10:49.479639   73732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:10:49.499034   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:10:49.510134   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:10:49.510159   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:10:49.510203   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:10:49.520482   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:10:49.520544   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:10:49.530750   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:10:49.539113   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:10:49.539183   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:10:49.548104   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.556754   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:10:49.556811   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.565606   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:10:49.574023   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:10:49.574091   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:10:49.582888   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:10:49.591876   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:49.688517   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.070191   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.38163928s)
	I1105 19:10:51.070240   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.267774   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.329051   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.406120   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:10:51.406226   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:51.907080   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:52.406468   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:49.735558   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735923   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735987   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:49.735914   75165 retry.go:31] will retry after 1.132319443s: waiting for machine to come up
	I1105 19:10:50.870267   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870770   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870801   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:50.870715   75165 retry.go:31] will retry after 1.791598796s: waiting for machine to come up
	I1105 19:10:52.664538   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665055   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:52.664912   75165 retry.go:31] will retry after 1.910294965s: waiting for machine to come up
	I1105 19:10:52.907103   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.407319   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.421763   73732 api_server.go:72] duration metric: took 2.015640262s to wait for apiserver process to appear ...
	I1105 19:10:53.421794   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:10:53.421816   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.752768   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.752803   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.752819   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.772365   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.772412   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.922705   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.928293   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:55.928329   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.422875   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.430633   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.430667   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.922156   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.934958   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.935016   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:57.422646   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:57.428784   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:10:57.435298   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:10:57.435319   73732 api_server.go:131] duration metric: took 4.013519207s to wait for apiserver health ...
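The 403 and 500 responses above are expected during bring-up: anonymous /healthz probes are rejected until RBAC bootstrap finishes, and the 500 dumps list the post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that are still pending. minikube simply keeps polling until a 200 comes back. A bare-bones poll in the same spirit (the hard-coded endpoint and skipped certificate verification are shortcuts taken only for this sketch):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // minikube authenticates with the cluster CA; skipping verification here
        // just keeps the sketch self-contained.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.58:8443/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for /healthz")
    }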
	I1105 19:10:57.435327   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:57.435333   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:57.437061   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:10:57.438374   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:10:57.448509   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:10:57.465994   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:10:57.474649   73732 system_pods.go:59] 8 kube-system pods found
	I1105 19:10:57.474682   73732 system_pods.go:61] "coredns-7c65d6cfc9-nwzpq" [be8aa054-3f68-4c19-bae3-9d9cfcb51869] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:10:57.474691   73732 system_pods.go:61] "etcd-embed-certs-271881" [c37c829b-1dca-4659-b24c-4559304d9fe0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:10:57.474703   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [6df78e2a-1360-4c4b-b451-c96aa60f24ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:10:57.474710   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [95a6baca-c246-4043-acbc-235b076a89b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:10:57.474723   73732 system_pods.go:61] "kube-proxy-f945s" [2cb835f0-3727-4dd1-bd21-a21554ffdc0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 19:10:57.474730   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [53e044c5-199c-46f4-b3db-d3b65a8203aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:10:57.474741   73732 system_pods.go:61] "metrics-server-6867b74b74-vw2sm" [403d0c5f-d870-4f89-8caa-f5e9c8bf9ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:10:57.474748   73732 system_pods.go:61] "storage-provisioner" [13a89bf9-fb97-413a-9948-1c69780784cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 19:10:57.474758   73732 system_pods.go:74] duration metric: took 8.737357ms to wait for pod list to return data ...
	I1105 19:10:57.474769   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:10:57.480599   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:10:57.480623   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:10:57.480634   73732 node_conditions.go:105] duration metric: took 5.857622ms to run NodePressure ...
	I1105 19:10:57.480651   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:54.577390   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577939   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577969   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:54.577885   75165 retry.go:31] will retry after 3.393120773s: waiting for machine to come up
	I1105 19:10:57.971960   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972441   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:57.972370   75165 retry.go:31] will retry after 4.425954537s: waiting for machine to come up
	I1105 19:10:57.896717   73732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902115   73732 kubeadm.go:739] kubelet initialised
	I1105 19:10:57.902138   73732 kubeadm.go:740] duration metric: took 5.39576ms waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902152   73732 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:10:57.907293   73732 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:10:59.913946   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:02.414802   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:03.663928   74485 start.go:364] duration metric: took 3m10.909065205s to acquireMachinesLock for "old-k8s-version-567666"
	I1105 19:11:03.664023   74485 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:03.664038   74485 fix.go:54] fixHost starting: 
	I1105 19:11:03.664514   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:03.664569   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:03.682846   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I1105 19:11:03.683341   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:03.683786   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:11:03.683812   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:03.684219   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:03.684407   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:03.684552   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetState
	I1105 19:11:03.686262   74485 fix.go:112] recreateIfNeeded on old-k8s-version-567666: state=Stopped err=<nil>
	I1105 19:11:03.686295   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	W1105 19:11:03.686440   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:03.688047   74485 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-567666" ...
	I1105 19:11:02.401454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.401980   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Found IP for machine: 192.168.50.10
	I1105 19:11:02.402015   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has current primary IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.402025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserving static IP address...
	I1105 19:11:02.402384   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.402413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserved static IP address: 192.168.50.10
	I1105 19:11:02.402432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | skip adding static IP to network mk-default-k8s-diff-port-608095 - found existing host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"}
	I1105 19:11:02.402445   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for SSH to be available...
	I1105 19:11:02.402461   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Getting to WaitForSSH function...
	I1105 19:11:02.404454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404751   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.404778   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404915   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH client type: external
	I1105 19:11:02.404964   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa (-rw-------)
	I1105 19:11:02.405032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:02.405059   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | About to run SSH command:
	I1105 19:11:02.405072   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | exit 0
	I1105 19:11:02.526769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:02.527147   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetConfigRaw
	I1105 19:11:02.527756   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.530014   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530325   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.530357   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530527   74141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/config.json ...
	I1105 19:11:02.530708   74141 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:02.530728   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:02.530921   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.532868   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533184   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.533215   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533334   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.533493   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533630   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533761   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.533930   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.534116   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.534128   74141 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:02.631085   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:02.631114   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631351   74141 buildroot.go:166] provisioning hostname "default-k8s-diff-port-608095"
	I1105 19:11:02.631376   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631540   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.634037   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634371   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.634400   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634517   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.634691   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634849   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634995   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.635136   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.635310   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.635326   74141 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-608095 && echo "default-k8s-diff-port-608095" | sudo tee /etc/hostname
	I1105 19:11:02.744298   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-608095
	
	I1105 19:11:02.744327   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.747036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747348   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.747379   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747555   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.747716   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747846   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747940   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.748061   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.748266   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.748284   74141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-608095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-608095/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-608095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:02.850828   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
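The shell run above keeps /etc/hosts consistent with the new hostname: an existing 127.0.1.1 entry is rewritten, otherwise one is appended. A small sketch of the same idempotent update, operating on an in-memory copy rather than the guest's real file:

// Sketch: replace or append the 127.0.1.1 hostname entry, as the SSH command above does.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func setLoopbackHostname(hosts, name string) string {
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, entry)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + entry + "\n"
}

func main() {
	sample := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(setLoopbackHostname(sample, "default-k8s-diff-port-608095"))
}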
	I1105 19:11:02.850854   74141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:02.850906   74141 buildroot.go:174] setting up certificates
	I1105 19:11:02.850923   74141 provision.go:84] configureAuth start
	I1105 19:11:02.850935   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.851260   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.853803   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854062   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.854088   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854203   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.856341   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856629   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.856659   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856747   74141 provision.go:143] copyHostCerts
	I1105 19:11:02.856804   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:02.856823   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:02.856874   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:02.856987   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:02.856997   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:02.857017   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:02.857075   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:02.857082   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:02.857100   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:02.857148   74141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-608095 san=[127.0.0.1 192.168.50.10 default-k8s-diff-port-608095 localhost minikube]
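The provisioner generates a server certificate whose SAN list covers 127.0.0.1, the VM IP, the machine name, localhost, and minikube, signed by the shared CA. A rough, self-contained sketch of that step is below; the throwaway CA, key sizes, and validity period are assumptions, since the real flow loads the CA material from the .minikube/certs directory.

// Sketch: issue a server certificate with the SAN list shown in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA so the sketch is self-contained; minikube reuses its existing CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-608095"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.10")},
		DNSNames:     []string{"default-k8s-diff-port-608095", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}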
	I1105 19:11:03.048307   74141 provision.go:177] copyRemoteCerts
	I1105 19:11:03.048362   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:03.048386   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.050951   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051303   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.051353   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051556   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.051785   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.051953   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.052084   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.128441   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:03.150680   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1105 19:11:03.172480   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:03.194311   74141 provision.go:87] duration metric: took 343.374586ms to configureAuth
	I1105 19:11:03.194338   74141 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:03.194499   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:03.194560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.197209   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197585   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.197603   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197822   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.198006   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198168   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198336   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.198503   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.198686   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.198706   74141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:03.429895   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:03.429926   74141 machine.go:96] duration metric: took 899.201597ms to provisionDockerMachine
	I1105 19:11:03.429941   74141 start.go:293] postStartSetup for "default-k8s-diff-port-608095" (driver="kvm2")
	I1105 19:11:03.429955   74141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:03.429976   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.430329   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:03.430364   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.433455   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.433791   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.433820   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.434009   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.434323   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.434500   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.434659   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.514652   74141 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:03.518678   74141 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:03.518711   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:03.518774   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:03.518877   74141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:03.519014   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:03.528972   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:03.555892   74141 start.go:296] duration metric: took 125.936355ms for postStartSetup
	I1105 19:11:03.555939   74141 fix.go:56] duration metric: took 19.896481237s for fixHost
	I1105 19:11:03.555966   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.558764   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559153   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.559183   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559402   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.559610   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559788   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559933   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.560116   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.560292   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.560303   74141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:03.663723   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833863.637227261
	
	I1105 19:11:03.663751   74141 fix.go:216] guest clock: 1730833863.637227261
	I1105 19:11:03.663766   74141 fix.go:229] Guest: 2024-11-05 19:11:03.637227261 +0000 UTC Remote: 2024-11-05 19:11:03.555945261 +0000 UTC m=+239.048686257 (delta=81.282ms)
	I1105 19:11:03.663815   74141 fix.go:200] guest clock delta is within tolerance: 81.282ms
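The fixHost step reads the guest clock over SSH (date +%s.%N), compares it with the host clock, and accepts the machine when the skew is small; here the delta is 81.282ms. A tiny sketch of that comparison, with a 2s tolerance chosen only for illustration:

// Sketch: parse the guest's `date +%s.%N` output and report the clock skew.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOut := "1730833863.637227261" // what the guest printed over SSH
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (within 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}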
	I1105 19:11:03.663822   74141 start.go:83] releasing machines lock for "default-k8s-diff-port-608095", held for 20.004399519s
	I1105 19:11:03.663858   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.664158   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:03.666922   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667372   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.667408   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668101   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668297   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668412   74141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:03.668478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.668748   74141 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:03.668774   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.671463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671781   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.671810   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671903   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672175   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672333   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.672369   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.672417   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672578   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.672598   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672779   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.673106   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.777585   74141 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:03.783343   74141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:03.927951   74141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:03.933308   74141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:03.933380   74141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:03.948472   74141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:03.948499   74141 start.go:495] detecting cgroup driver to use...
	I1105 19:11:03.948572   74141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:03.963929   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:03.978578   74141 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:03.978643   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:03.992096   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:04.006036   74141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:04.114061   74141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:04.274136   74141 docker.go:233] disabling docker service ...
	I1105 19:11:04.274220   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:04.287806   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:04.300294   74141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:04.429899   74141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:04.576075   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:04.590934   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:04.611299   74141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:04.611375   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.623876   74141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:04.623949   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.634333   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.644768   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.654549   74141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:04.665001   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.675464   74141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.693845   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.703982   74141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:04.713758   74141 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:04.713820   74141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:04.727618   74141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:04.737679   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:04.866928   74141 ssh_runner.go:195] Run: sudo systemctl restart crio
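The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, and force conmon_cgroup to "pod", then restart crio. The sketch below applies the same three edits as in-memory string rewrites; the sample input config is made up, and the real run edits the guest's file directly.

// Sketch of the 02-crio.conf edits performed above, done on a string copy.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then pin it to "pod" as the log does
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}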
	I1105 19:11:04.966529   74141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:04.966599   74141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:04.971536   74141 start.go:563] Will wait 60s for crictl version
	I1105 19:11:04.971602   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:11:04.975344   74141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:05.015910   74141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:05.015987   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.043577   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.072767   74141 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:03.689374   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .Start
	I1105 19:11:03.689560   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring networks are active...
	I1105 19:11:03.690290   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network default is active
	I1105 19:11:03.690659   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network mk-old-k8s-version-567666 is active
	I1105 19:11:03.691130   74485 main.go:141] libmachine: (old-k8s-version-567666) Getting domain xml...
	I1105 19:11:03.691890   74485 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:11:05.006949   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting to get IP...
	I1105 19:11:05.008062   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.008547   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.008605   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.008523   75309 retry.go:31] will retry after 290.124771ms: waiting for machine to come up
	I1105 19:11:05.300185   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.300768   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.300803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.300717   75309 retry.go:31] will retry after 292.829683ms: waiting for machine to come up
	I1105 19:11:05.595365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.595881   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.595907   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.595831   75309 retry.go:31] will retry after 447.168257ms: waiting for machine to come up
	I1105 19:11:06.045320   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.045946   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.045976   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.045893   75309 retry.go:31] will retry after 420.272812ms: waiting for machine to come up
	I1105 19:11:06.467556   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.468012   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.468039   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.467962   75309 retry.go:31] will retry after 657.733497ms: waiting for machine to come up
	I1105 19:11:07.128022   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:07.128531   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:07.128559   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:07.128484   75309 retry.go:31] will retry after 922.664226ms: waiting for machine to come up
	I1105 19:11:04.416533   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:06.915445   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:07.417579   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:07.417610   73732 pod_ready.go:82] duration metric: took 9.510292246s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:07.417620   73732 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
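pod_ready.go repeatedly checks the pod's Ready condition until it flips to True (here the coredns pod took about 9.5s). An equivalent readiness check can be driven from the command line with kubectl wait; the sketch below assumes the kubeconfig context carries the profile name and mirrors the 4m0s budget shown above.

// Sketch: wait for a pod's Ready condition via kubectl, analogous to the log's pod_ready wait.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "embed-certs-271881",
		"-n", "kube-system", "wait", "--for=condition=Ready",
		"pod/etcd-embed-certs-271881", "--timeout=4m0s")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}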
	I1105 19:11:05.073913   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:05.077086   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077430   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:05.077468   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077691   74141 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:05.081724   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:05.093668   74141 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:05.093785   74141 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:05.093853   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:05.128693   74141 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:05.128753   74141 ssh_runner.go:195] Run: which lz4
	I1105 19:11:05.133116   74141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:05.137101   74141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:05.137126   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:11:06.379012   74141 crio.go:462] duration metric: took 1.245926141s to copy over tarball
	I1105 19:11:06.379088   74141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:08.545369   74141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.166238549s)
	I1105 19:11:08.545405   74141 crio.go:469] duration metric: took 2.166364449s to extract the tarball
	I1105 19:11:08.545422   74141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:08.581651   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:08.628768   74141 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:11:08.628795   74141 cache_images.go:84] Images are preloaded, skipping loading
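The preload step above copies the cached image tarball to /preloaded.tar.lz4, extracts it under /var so CRI-O already has every image, then removes the archive and re-checks crictl images. A minimal sketch of the extract step, shelling out to tar exactly as the log does (running it for real requires root and the preload file on disk):

// Sketch: extract the lz4-compressed preload tarball into /var, mirroring the command above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted; verify with: sudo crictl images --output json")
}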
	I1105 19:11:08.628805   74141 kubeadm.go:934] updating node { 192.168.50.10 8444 v1.31.2 crio true true} ...
	I1105 19:11:08.628937   74141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-608095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:08.629056   74141 ssh_runner.go:195] Run: crio config
	I1105 19:11:08.690112   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:08.690140   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:08.690152   74141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:08.690184   74141 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-608095 NodeName:default-k8s-diff-port-608095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:08.690346   74141 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-608095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
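Note: the block above is one multi-document YAML file combining kubeadm's InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4) with a KubeletConfiguration and a KubeProxyConfiguration, which is then copied to the node as kubeadm.yaml.new (see the scp line below). The following is a minimal, illustrative Go sketch — not minikube's own code — of one way to split such a file and list the kind of each document; it assumes the gopkg.in/yaml.v3 package and the path shown in the log.

    // kubeadm_yaml_kinds.go - illustrative sketch: split the multi-document kubeadm
    // config and print each document's apiVersion/kind.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    type typeMeta struct {
    	APIVersion string `yaml:"apiVersion"`
    	Kind       string `yaml:"kind"`
    }

    func main() {
    	// Path taken from the log below; adjust as needed.
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, doc := range strings.Split(string(data), "\n---") {
    		if strings.TrimSpace(doc) == "" {
    			continue
    		}
    		var tm typeMeta
    		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
    	}
    }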
	
	I1105 19:11:08.690415   74141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:08.700222   74141 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:08.700294   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:08.709542   74141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1105 19:11:08.725723   74141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:08.741985   74141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1105 19:11:08.758655   74141 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:08.762296   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
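Note: the bash one-liner above keeps /etc/hosts idempotent: it drops any existing line ending in a tab plus control-plane.minikube.internal, appends the fresh mapping, and copies the result back via a temp file. The Go below is a minimal sketch of the same filter-and-append logic, not the command minikube actually runs (it skips the temp file and sudo cp step).

    // ensure_hosts_entry.go - illustrative sketch of the idempotent /etc/hosts rewrite.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	const entry = "192.168.50.10\t" + host

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale entry: remove it so the current IP wins
    		}
    		kept = append(kept, line)
    	}
    	for len(kept) > 0 && kept[len(kept)-1] == "" {
    		kept = kept[:len(kept)-1] // trim trailing blank lines before appending
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }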
	I1105 19:11:08.774119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:08.910000   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
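Note: the preceding steps write a systemd drop-in (10-kubeadm.conf) that overrides the kubelet unit's ExecStart with the flags logged at the top of this step, then reload systemd and start kubelet. The sketch below approximates that sequence in Go; the drop-in body mirrors the logged snippet but is illustrative, not the exact 327-byte file minikube copies.

    // kubelet_dropin.go - illustrative sketch: write the ExecStart override and start kubelet.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    const dropin = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-608095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10

    [Install]
    `

    func main() {
    	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropin), 0644); err != nil {
    		log.Fatal(err)
    	}
    	// Reload unit files, then start the service, as in the log above.
    	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
    		}
    	}
    }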
	I1105 19:11:08.926765   74141 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095 for IP: 192.168.50.10
	I1105 19:11:08.926788   74141 certs.go:194] generating shared ca certs ...
	I1105 19:11:08.926806   74141 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:08.927006   74141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:08.927069   74141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:08.927080   74141 certs.go:256] generating profile certs ...
	I1105 19:11:08.927157   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/client.key
	I1105 19:11:08.927229   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key.f2b96156
	I1105 19:11:08.927281   74141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key
	I1105 19:11:08.927456   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:08.927506   74141 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:08.927516   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:08.927549   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:08.927585   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:08.927620   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:08.927682   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:08.928417   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:08.971359   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:09.011632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:09.049748   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:09.078632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 19:11:09.105786   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:09.127855   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:09.151461   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:11:09.174068   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:09.196733   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:09.219111   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:09.241335   74141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:09.257040   74141 ssh_runner.go:195] Run: openssl version
	I1105 19:11:09.262371   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:09.272232   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276300   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276362   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.281747   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:09.291864   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:09.302012   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306085   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306142   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.311374   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:09.321334   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:09.331208   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335401   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335451   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.340595   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:09.350430   74141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:09.354622   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:09.360165   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:09.365624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:09.371545   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:09.377226   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:09.382624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
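Note: the ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names produced by the preceding "openssl x509 -hash" runs, and each "-checkend 86400" run asks whether a certificate expires within the next 24 hours. The Go below is a minimal sketch of that expiry check using crypto/x509 — illustrative only, using one of the cert paths from the log.

    // checkend.go - illustrative equivalent of "openssl x509 -checkend 86400".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Path taken from the log above; any of the checked certs works the same way.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
    		fmt.Println("Certificate will expire")
    		os.Exit(1)
    	}
    	fmt.Println("Certificate will not expire")
    }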
	I1105 19:11:09.387929   74141 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:09.388032   74141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:09.388076   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.429707   74141 cri.go:89] found id: ""
	I1105 19:11:09.429783   74141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:09.440455   74141 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:09.440476   74141 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:09.440527   74141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:09.451745   74141 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:09.452609   74141 kubeconfig.go:125] found "default-k8s-diff-port-608095" server: "https://192.168.50.10:8444"
	I1105 19:11:09.454539   74141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:09.463900   74141 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.10
	I1105 19:11:09.463926   74141 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:09.463936   74141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:09.463987   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.497583   74141 cri.go:89] found id: ""
	I1105 19:11:09.497656   74141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:09.513767   74141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:09.523219   74141 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:09.523237   74141 kubeadm.go:157] found existing configuration files:
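Note: the ls failure above shows that none of the expected /etc/kubernetes/*.conf kubeconfig files survived the VM restart, so the stale-config cleanup has nothing to remove and the files are regenerated shortly below by "kubeadm init phase kubeconfig all". A minimal Go sketch of that presence check (illustrative, not minikube's helper) follows.

    // stale_configs.go - illustrative sketch: report which expected kubeconfig files exist.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if _, err := os.Stat(f); err == nil {
    			fmt.Println("found:  ", f)
    		} else if os.IsNotExist(err) {
    			fmt.Println("missing:", f)
    		} else {
    			fmt.Println("error:  ", err)
    		}
    	}
    }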
	
	I1105 19:11:09.523284   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1105 19:11:09.533116   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:09.533181   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:09.542453   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1105 19:11:08.053120   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:08.053610   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:08.053636   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:08.053587   75309 retry.go:31] will retry after 947.415519ms: waiting for machine to come up
	I1105 19:11:09.002803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:09.003423   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:09.003452   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:09.003363   75309 retry.go:31] will retry after 1.07978111s: waiting for machine to come up
	I1105 19:11:10.084404   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:10.084808   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:10.084830   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:10.084784   75309 retry.go:31] will retry after 1.482510322s: waiting for machine to come up
	I1105 19:11:11.568421   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:11.568840   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:11.568869   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:11.568791   75309 retry.go:31] will retry after 1.630983434s: waiting for machine to come up
	I1105 19:11:08.426308   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.426337   73732 pod_ready.go:82] duration metric: took 1.008708779s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.426350   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432238   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.432264   73732 pod_ready.go:82] duration metric: took 5.905051ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432276   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438187   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.438214   73732 pod_ready.go:82] duration metric: took 5.9294ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438226   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443794   73732 pod_ready.go:93] pod "kube-proxy-f945s" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.443823   73732 pod_ready.go:82] duration metric: took 5.587862ms for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443835   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:10.449498   73732 pod_ready.go:103] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:12.454934   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:12.454965   73732 pod_ready.go:82] duration metric: took 4.011121022s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:12.455003   73732 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:09.551174   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:09.551235   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:09.560481   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.571928   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:09.571997   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.583935   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1105 19:11:09.595336   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:09.595401   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:09.605061   74141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:09.613920   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:09.718759   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.680100   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.901034   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.951868   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.997866   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:10.997956   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.498113   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.998192   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.498517   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.998919   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:13.013078   74141 api_server.go:72] duration metric: took 2.01520799s to wait for apiserver process to appear ...
	I1105 19:11:13.013106   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:11:13.013136   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.042333   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.042388   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.042404   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.085574   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.085602   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.513733   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.518755   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:16.518789   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.013278   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.019214   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:17.019236   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.513886   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.519036   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:11:17.528970   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:11:17.529000   74141 api_server.go:131] duration metric: took 4.515887773s to wait for apiserver health ...
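Note: the wait above polls the apiserver's /healthz endpoint until it answers 200. The early 403s show the probe being treated as the anonymous user before authentication and RBAC are fully wired up, and the 500s enumerate post-start hooks (rbac/bootstrap-roles, the bootstrap priority classes) that have not finished yet. The Go below is a minimal, illustrative poller — not minikube's api_server.go code — that skips TLS verification because the apiserver uses a minikube-generated CA; host and port are taken from the log.

    // healthz_wait.go - illustrative sketch: poll /healthz until the apiserver is healthy.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.50.10:8444/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body)
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver did not become healthy in time")
    }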
	I1105 19:11:17.529009   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:17.529016   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:17.530429   74141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:11:13.201891   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:13.202425   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:13.202453   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:13.202387   75309 retry.go:31] will retry after 2.689744765s: waiting for machine to come up
	I1105 19:11:15.893632   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:15.893989   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:15.894034   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:15.893964   75309 retry.go:31] will retry after 2.460566804s: waiting for machine to come up
	I1105 19:11:14.465748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:16.961287   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:17.531600   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:11:17.544876   74141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:11:17.567835   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:11:17.583925   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:11:17.583976   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:11:17.583988   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:11:17.583999   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:11:17.584015   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:11:17.584027   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:11:17.584041   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:11:17.584052   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:11:17.584060   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:11:17.584068   74141 system_pods.go:74] duration metric: took 16.206948ms to wait for pod list to return data ...
	I1105 19:11:17.584081   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:11:17.593935   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:11:17.593960   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:11:17.593971   74141 node_conditions.go:105] duration metric: took 9.883295ms to run NodePressure ...
	I1105 19:11:17.593988   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:17.929181   74141 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933853   74141 kubeadm.go:739] kubelet initialised
	I1105 19:11:17.933879   74141 kubeadm.go:740] duration metric: took 4.667992ms waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933888   74141 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:17.940560   74141 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.952799   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952832   74141 pod_ready.go:82] duration metric: took 12.240861ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.952845   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952856   74141 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.959079   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959105   74141 pod_ready.go:82] duration metric: took 6.23649ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.959119   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959130   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.963797   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963817   74141 pod_ready.go:82] duration metric: took 4.681011ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.963830   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963837   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.970915   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970935   74141 pod_ready.go:82] duration metric: took 7.091116ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.970945   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970951   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.371478   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371503   74141 pod_ready.go:82] duration metric: took 400.5454ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.371512   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371519   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.771731   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771768   74141 pod_ready.go:82] duration metric: took 400.239012ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.771783   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771792   74141 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:19.171239   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171271   74141 pod_ready.go:82] duration metric: took 399.46983ms for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:19.171286   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171296   74141 pod_ready.go:39] duration metric: took 1.237397637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
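Note: each pod_ready.go:98 line above skips the pod because its node still reports Ready:"False"; the wait only inspects the PodReady condition once the hosting node is Ready. The Go below is a minimal client-go sketch of reading that condition for one of the pods named in the log — illustrative only, reusing the kubeconfig path from this run.

    // pod_ready.go - illustrative client-go sketch: report a pod's PodReady condition.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19910-8296/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7c65d6cfc9-cdvml", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			fmt.Printf("PodReady=%s (%s)\n", cond.Status, cond.Reason)
    			return
    		}
    	}
    	fmt.Println("PodReady condition not reported yet")
    }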
	I1105 19:11:19.171315   74141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:11:19.185845   74141 ops.go:34] apiserver oom_adj: -16
	I1105 19:11:19.185869   74141 kubeadm.go:597] duration metric: took 9.745385943s to restartPrimaryControlPlane
	I1105 19:11:19.185880   74141 kubeadm.go:394] duration metric: took 9.797958845s to StartCluster
	I1105 19:11:19.185901   74141 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.185989   74141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:19.187722   74141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.187971   74141 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:11:19.188036   74141 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:11:19.188142   74141 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188160   74141 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-608095"
	I1105 19:11:19.188159   74141 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-608095"
	W1105 19:11:19.188171   74141 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:11:19.188199   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188236   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:19.188248   74141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-608095"
	I1105 19:11:19.188273   74141 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188310   74141 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.188323   74141 addons.go:243] addon metrics-server should already be in state true
	I1105 19:11:19.188379   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188526   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188569   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188674   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188725   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188802   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188823   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.189792   74141 out.go:177] * Verifying Kubernetes components...
	I1105 19:11:19.191119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:19.203875   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I1105 19:11:19.204313   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.204803   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.204830   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.205083   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I1105 19:11:19.205175   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.205432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.205488   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.205973   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.205999   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.206357   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.206916   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.206955   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.207292   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I1105 19:11:19.207671   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.208122   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.208146   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.208484   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.208861   74141 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.208882   74141 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:11:19.208909   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.209004   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209045   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.209234   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209273   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.223963   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I1105 19:11:19.224405   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.225044   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.225074   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.225460   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.226141   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I1105 19:11:19.226463   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.226509   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.226577   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.226757   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I1105 19:11:19.227058   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.227081   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.227475   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.227558   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.227797   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.228116   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.228136   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.228530   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.228755   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.229870   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.230471   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.232239   74141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:19.232263   74141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:11:19.233508   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:11:19.233527   74141 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:11:19.233548   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.233607   74141 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.233626   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:11:19.233647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.237337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237365   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237895   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237928   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237958   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237972   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.238155   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238270   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238440   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238623   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238681   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.239040   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.243685   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1105 19:11:19.244073   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.244584   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.244602   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.244951   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.245112   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.246617   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.246814   74141 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.246830   74141 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:11:19.246845   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.249467   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.249896   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.249925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.250139   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.250317   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.250466   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.250636   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.396917   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:19.412224   74141 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:19.541493   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.566934   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:11:19.566982   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:11:19.567627   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.607685   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:11:19.607717   74141 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:11:19.640921   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:19.640959   74141 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:11:19.674550   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:20.091222   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091248   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091528   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091583   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091596   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091605   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091807   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091868   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091853   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.105073   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.105093   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.105426   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.105442   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719139   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.151476995s)
	I1105 19:11:20.719187   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719194   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.044605505s)
	I1105 19:11:20.719236   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719256   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719511   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719582   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719593   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719596   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719631   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719580   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719643   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719654   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719670   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719680   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719897   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719946   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719948   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719903   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719982   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719990   74141 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-608095"
	I1105 19:11:20.719927   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.721843   74141 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1105 19:11:22.583507   73496 start.go:364] duration metric: took 54.335724939s to acquireMachinesLock for "no-preload-459223"
	I1105 19:11:22.583581   73496 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:22.583590   73496 fix.go:54] fixHost starting: 
	I1105 19:11:22.584018   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:22.584054   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:22.603921   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1105 19:11:22.604367   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:22.604825   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:11:22.604845   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:22.605233   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:22.605408   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:22.605534   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:11:22.607289   73496 fix.go:112] recreateIfNeeded on no-preload-459223: state=Stopped err=<nil>
	I1105 19:11:22.607314   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	W1105 19:11:22.607458   73496 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:22.609455   73496 out.go:177] * Restarting existing kvm2 VM for "no-preload-459223" ...
	I1105 19:11:18.357643   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:18.358065   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:18.358099   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:18.358009   75309 retry.go:31] will retry after 3.036834524s: waiting for machine to come up
	I1105 19:11:21.398221   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398763   74485 main.go:141] libmachine: (old-k8s-version-567666) Found IP for machine: 192.168.61.125
	I1105 19:11:21.398825   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has current primary IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398843   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserving static IP address...
	I1105 19:11:21.399327   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.399350   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserved static IP address: 192.168.61.125
	I1105 19:11:21.399365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | skip adding static IP to network mk-old-k8s-version-567666 - found existing host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"}
	I1105 19:11:21.399379   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:11:21.399394   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting for SSH to be available...
	I1105 19:11:21.401270   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401664   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.401691   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401866   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:11:21.401897   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:11:21.401935   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:21.401949   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:11:21.401959   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:11:21.527815   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:21.528165   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:11:21.528874   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.531373   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531647   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.531672   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531876   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:11:21.532071   74485 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:21.532092   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:21.532332   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.534177   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534431   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.534465   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534556   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.534716   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534845   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534960   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.535142   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.535329   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.535341   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:21.643321   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:21.643354   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643618   74485 buildroot.go:166] provisioning hostname "old-k8s-version-567666"
	I1105 19:11:21.643646   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643812   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.646230   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646628   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.646666   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.647037   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647167   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647290   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.647421   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.647579   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.647592   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-567666 && echo "old-k8s-version-567666" | sudo tee /etc/hostname
	I1105 19:11:21.770209   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-567666
	
	I1105 19:11:21.770255   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.772932   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773314   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.773346   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773484   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.773691   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773950   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.774121   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.774357   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.774386   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-567666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-567666/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-567666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:21.890834   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:21.890860   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:21.890915   74485 buildroot.go:174] setting up certificates
	I1105 19:11:21.890929   74485 provision.go:84] configureAuth start
	I1105 19:11:21.890944   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.891224   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.893835   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894256   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.894285   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.896436   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896699   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.896715   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896893   74485 provision.go:143] copyHostCerts
	I1105 19:11:21.896951   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:21.896967   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:21.897037   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:21.897163   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:21.897176   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:21.897205   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:21.897279   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:21.897289   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:21.897315   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:21.897396   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-567666 san=[127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666]
	I1105 19:11:21.962153   74485 provision.go:177] copyRemoteCerts
	I1105 19:11:21.962219   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:21.962257   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.964765   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965125   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.965166   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965330   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.965478   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.965603   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.965746   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.048519   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:22.072975   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 19:11:22.098263   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:22.120258   74485 provision.go:87] duration metric: took 229.316972ms to configureAuth
	I1105 19:11:22.120285   74485 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:22.120444   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:11:22.120516   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.123859   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124309   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.124344   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124536   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.124737   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.124922   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.125055   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.125213   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.125375   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.125388   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:22.349922   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:22.349964   74485 machine.go:96] duration metric: took 817.87332ms to provisionDockerMachine
	I1105 19:11:22.349979   74485 start.go:293] postStartSetup for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:11:22.349992   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:22.350014   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.350350   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:22.350385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.352922   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353310   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.353332   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353459   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.353638   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.353807   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.353921   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.437482   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:22.441617   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:22.441646   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:22.441711   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:22.441807   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:22.441929   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:22.451016   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:22.474199   74485 start.go:296] duration metric: took 124.207336ms for postStartSetup
	I1105 19:11:22.474233   74485 fix.go:56] duration metric: took 18.810197154s for fixHost
	I1105 19:11:22.474269   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.476786   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477119   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.477157   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477279   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.477471   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477621   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477753   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.477910   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.478070   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.478081   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:22.583343   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833882.558222038
	
	I1105 19:11:22.583363   74485 fix.go:216] guest clock: 1730833882.558222038
	I1105 19:11:22.583372   74485 fix.go:229] Guest: 2024-11-05 19:11:22.558222038 +0000 UTC Remote: 2024-11-05 19:11:22.474236871 +0000 UTC m=+209.862783450 (delta=83.985167ms)
	I1105 19:11:22.583418   74485 fix.go:200] guest clock delta is within tolerance: 83.985167ms
	I1105 19:11:22.583429   74485 start.go:83] releasing machines lock for "old-k8s-version-567666", held for 18.919444623s
	I1105 19:11:22.583460   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.583717   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:22.586183   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586479   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.586509   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586687   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587137   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587310   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587400   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:22.587448   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.587521   74485 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:22.587548   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.590145   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590474   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.590507   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590530   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590655   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.590831   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.590995   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.591010   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591037   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.591179   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.591286   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.591438   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.591558   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591702   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:19.461723   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:21.962582   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:22.702707   74485 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:22.708965   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:22.856764   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:22.863791   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:22.863866   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:22.883997   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:22.884022   74485 start.go:495] detecting cgroup driver to use...
	I1105 19:11:22.884094   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:22.901499   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:22.919358   74485 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:22.919422   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:22.936964   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:22.953538   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:23.077720   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:23.218316   74485 docker.go:233] disabling docker service ...
	I1105 19:11:23.218390   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:23.238316   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:23.251814   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:23.427386   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:23.552928   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:23.567149   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:23.587241   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 19:11:23.587307   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.597558   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:23.597620   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.607466   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.616794   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.626425   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:23.637121   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:23.649243   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:23.649305   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:23.664648   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:23.675060   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:23.812636   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:23.903326   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:23.903404   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:23.908377   74485 start.go:563] Will wait 60s for crictl version
	I1105 19:11:23.908434   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:23.912163   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:23.961712   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:23.961794   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:23.992951   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:24.032041   74485 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1105 19:11:20.723316   74141 addons.go:510] duration metric: took 1.53528546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1105 19:11:21.416385   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:23.416458   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:22.610737   73496 main.go:141] libmachine: (no-preload-459223) Calling .Start
	I1105 19:11:22.610910   73496 main.go:141] libmachine: (no-preload-459223) Ensuring networks are active...
	I1105 19:11:22.611680   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network default is active
	I1105 19:11:22.612057   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network mk-no-preload-459223 is active
	I1105 19:11:22.612426   73496 main.go:141] libmachine: (no-preload-459223) Getting domain xml...
	I1105 19:11:22.613081   73496 main.go:141] libmachine: (no-preload-459223) Creating domain...
	I1105 19:11:24.013821   73496 main.go:141] libmachine: (no-preload-459223) Waiting to get IP...
	I1105 19:11:24.014922   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.015467   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.015561   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.015439   75501 retry.go:31] will retry after 233.461829ms: waiting for machine to come up
	I1105 19:11:24.251339   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.252673   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.252799   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.252760   75501 retry.go:31] will retry after 276.401207ms: waiting for machine to come up
	I1105 19:11:24.531408   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.531964   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.531987   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.531909   75501 retry.go:31] will retry after 367.69826ms: waiting for machine to come up
	I1105 19:11:24.901179   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.901579   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.901608   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.901536   75501 retry.go:31] will retry after 602.654501ms: waiting for machine to come up
	I1105 19:11:25.505889   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:25.506403   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:25.506426   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:25.506364   75501 retry.go:31] will retry after 492.077165ms: waiting for machine to come up
	I1105 19:11:24.033400   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:24.036549   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037128   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:24.037165   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037346   74485 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:24.042641   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:24.055174   74485 kubeadm.go:883] updating cluster {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:24.055327   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:11:24.055388   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:24.101655   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:24.101724   74485 ssh_runner.go:195] Run: which lz4
	I1105 19:11:24.105618   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:24.109705   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:24.109735   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 19:11:25.602158   74485 crio.go:462] duration metric: took 1.496564307s to copy over tarball
	I1105 19:11:25.602236   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:23.963218   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:26.461963   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:25.419351   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:26.916693   74141 node_ready.go:49] node "default-k8s-diff-port-608095" has status "Ready":"True"
	I1105 19:11:26.916731   74141 node_ready.go:38] duration metric: took 7.50447744s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:26.916744   74141 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:26.922179   74141 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927845   74141 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.927879   74141 pod_ready.go:82] duration metric: took 5.666725ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927892   74141 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932723   74141 pod_ready.go:93] pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.932752   74141 pod_ready.go:82] duration metric: took 4.843531ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932761   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937108   74141 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.937137   74141 pod_ready.go:82] duration metric: took 4.368536ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937152   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.941970   74141 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.941995   74141 pod_ready.go:82] duration metric: took 4.833418ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.942008   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317480   74141 pod_ready.go:93] pod "kube-proxy-8v42c" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.317505   74141 pod_ready.go:82] duration metric: took 375.489077ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317517   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717923   74141 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.717945   74141 pod_ready.go:82] duration metric: took 400.42059ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717956   74141 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
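The pod_ready lines above poll each system-critical pod until its Ready condition reports True. A minimal sketch of that kind of poll, shelling out to kubectl the way the harness shells commands through ssh_runner; the context, namespace and pod name below are placeholders, not values taken from this run.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitPodReady polls `kubectl get pod` until the Ready condition is True
    // or the timeout elapses. All identifiers here are illustrative.
    func waitPodReady(kubeContext, namespace, pod string, timeout time.Duration) error {
        jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "-n", namespace, "get", "pod", pod,
                "-o", "jsonpath="+jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second) // roughly the cadence seen in the log
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
    }

    func main() {
        err := waitPodReady("default-k8s-diff-port-608095", "kube-system",
            "etcd-default-k8s-diff-port-608095", 6*time.Minute)
        if err != nil {
            fmt.Println(err)
        }
    }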
	I1105 19:11:26.000041   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.000558   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.000613   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.000525   75501 retry.go:31] will retry after 920.198126ms: waiting for machine to come up
	I1105 19:11:26.922134   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.922917   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.922951   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.922858   75501 retry.go:31] will retry after 1.071853506s: waiting for machine to come up
	I1105 19:11:27.996574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:27.996995   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:27.997020   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:27.996949   75501 retry.go:31] will retry after 1.283200825s: waiting for machine to come up
	I1105 19:11:29.282457   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:29.282942   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:29.282979   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:29.282903   75501 retry.go:31] will retry after 1.512809658s: waiting for machine to come up
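The libmachine retries above ("will retry after 920ms / 1.07s / 1.28s / 1.51s ...") are a wait loop with a delay that grows each attempt plus some jitter. A rough sketch of that pattern with the IP lookup stubbed out; the lookup function and the delay growth are illustrative, not minikube's actual retry.go implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for asking libvirt for the domain's DHCP lease; here
    // it is a stub that always fails so the retry behaviour is visible.
    func lookupIP(domain string) (string, error) {
        return "", errors.New("unable to find current IP address of domain " + domain)
    }

    // waitForIP retries with a growing, jittered delay, similar in spirit to
    // the "will retry after ..." lines in the log.
    func waitForIP(domain string, attempts int) (string, error) {
        delay := 500 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay += delay / 2 // grow roughly 1.5x per attempt
        }
        return "", fmt.Errorf("no IP for %s after %d attempts", domain, attempts)
    }

    func main() {
        if _, err := waitForIP("no-preload-459223", 3); err != nil {
            fmt.Println(err)
        }
    }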
	I1105 19:11:28.701223   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.098952901s)
	I1105 19:11:28.701253   74485 crio.go:469] duration metric: took 3.099065633s to extract the tarball
	I1105 19:11:28.701263   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
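The sequence above (stat /preloaded.tar.lz4, scp the cached tarball, tar -xf into /var, rm) is the preload path taken once crictl reports no images. A compressed sketch of the on-host half of that flow, run locally instead of over SSH; the tarball path is a placeholder and the real code copies the file into the guest first.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks a preload tarball under /var with lz4 and then
    // removes it, mirroring the tar invocation shown in the log.
    func extractPreload(tarball string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload tarball missing: %w", err)
        }
        // Same flags as the log: preserve xattrs so file capabilities survive.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
            "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            return fmt.Errorf("extract failed: %w", err)
        }
        return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }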
	I1105 19:11:28.744214   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:28.778845   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:28.778868   74485 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:28.778962   74485 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:28.778945   74485 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.779024   74485 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.779039   74485 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.778939   74485 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.779067   74485 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.779083   74485 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.778957   74485 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781024   74485 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781003   74485 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.781052   74485 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.781002   74485 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.781088   74485 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.781114   74485 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.013637   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 19:11:29.043928   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.043936   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.044140   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.045892   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.046313   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.055792   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.081724   74485 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 19:11:29.081779   74485 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 19:11:29.081826   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.234925   74485 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 19:11:29.234966   74485 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.235046   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235079   74485 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 19:11:29.235112   74485 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.235136   74485 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 19:11:29.235152   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235167   74485 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.235200   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235238   74485 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 19:11:29.235277   74485 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.235298   74485 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 19:11:29.235320   74485 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.235333   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235352   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235351   74485 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 19:11:29.235385   74485 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.235415   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235426   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.251873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.251960   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.251985   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.252000   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.371298   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.415548   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.415592   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.415654   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.415710   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.415791   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.415868   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.466873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.544593   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.544660   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.586695   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.586714   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.586812   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.586916   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.606582   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 19:11:29.707767   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 19:11:29.707803   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 19:11:29.716195   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 19:11:29.723097   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 19:11:30.039971   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:30.182760   74485 cache_images.go:92] duration metric: took 1.403874987s to LoadCachedImages
	W1105 19:11:30.182890   74485 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
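The block above is the fallback taken when the extracted preload still leaves the v1.20.0 images missing: each image is inspected in the guest's podman store, marked "needs transfer" if absent, any stale copy is removed with crictl, and minikube then looks for it in the local image cache, which is empty here, hence the warning. A simplified sketch of that per-image decision, using the same commands the log shows; the cache path and naming convention are taken from the log paths but the function itself is illustrative.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // ensureImage checks whether an image is already in the runtime's store
    // and, if not, reports the cache file it would be loaded from. The real
    // code streams the cached tarball over SSH and loads it with podman.
    func ensureImage(image, cacheDir string) error {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err == nil && strings.TrimSpace(string(out)) != "" {
            return nil // already present, nothing to transfer
        }
        // Not in the store: drop any stale tag, then look for a cached copy.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        cached := cacheDir + "/" + strings.ReplaceAll(image, ":", "_")
        if _, err := os.Stat(cached); err != nil {
            return fmt.Errorf("unable to load cached image %s: %w", image, err)
        }
        fmt.Println("would load", image, "from", cached)
        return nil
    }

    func main() {
        images := []string{"registry.k8s.io/pause:3.2", "registry.k8s.io/etcd:3.4.13-0"}
        for _, img := range images {
            if err := ensureImage(img, "/home/jenkins/.minikube/cache/images/amd64"); err != nil {
                fmt.Println(err)
            }
        }
    }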
	I1105 19:11:30.182912   74485 kubeadm.go:934] updating node { 192.168.61.125 8443 v1.20.0 crio true true} ...
	I1105 19:11:30.183052   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-567666 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
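The kubelet unit fragment above is rendered from the node's settings (hostname override, node IP, runtime endpoint) and written to the 10-kubeadm.conf drop-in a few lines later. A minimal text/template sketch of rendering such a drop-in; the struct fields and the template string are made up for illustration and are not minikube's actual template.

    package main

    import (
        "os"
        "text/template"
    )

    // dropIn captures the few values that vary per node in the kubelet unit.
    // These field names are invented for the sketch.
    type dropIn struct {
        KubeletPath, Hostname, NodeIP, RuntimeEndpoint string
    }

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.RuntimeEndpoint}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        _ = t.Execute(os.Stdout, dropIn{
            KubeletPath:     "/var/lib/minikube/binaries/v1.20.0/kubelet",
            Hostname:        "old-k8s-version-567666",
            NodeIP:          "192.168.61.125",
            RuntimeEndpoint: "unix:///var/run/crio/crio.sock",
        })
    }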
	I1105 19:11:30.183146   74485 ssh_runner.go:195] Run: crio config
	I1105 19:11:30.235206   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:11:30.235241   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:30.235253   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:30.235277   74485 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-567666 NodeName:old-k8s-version-567666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 19:11:30.235433   74485 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-567666"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:30.235503   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 19:11:30.245189   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:30.245263   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:30.254772   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1105 19:11:30.271711   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:30.288568   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1105 19:11:30.309098   74485 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:30.313211   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
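The grep/bash one-liner above makes /etc/hosts contain exactly one control-plane.minikube.internal entry: strip any existing line for the name, append the current mapping, and replace the file. The same idea in plain Go, operating on an arbitrary hosts-style file for illustration rather than the guest's real /etc/hosts.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites a hosts-style file so it contains exactly one
    // line mapping hostname to ip, mirroring the grep|echo|cp one-liner above.
    func ensureHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        err := ensureHostsEntry("/tmp/hosts.example", "192.168.61.125",
            "control-plane.minikube.internal")
        if err != nil {
            fmt.Println(err)
        }
    }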
	I1105 19:11:30.325637   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:30.447346   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:30.466863   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666 for IP: 192.168.61.125
	I1105 19:11:30.466884   74485 certs.go:194] generating shared ca certs ...
	I1105 19:11:30.466898   74485 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:30.467086   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:30.467152   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:30.467165   74485 certs.go:256] generating profile certs ...
	I1105 19:11:30.467322   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key
	I1105 19:11:30.467398   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8
	I1105 19:11:30.467448   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key
	I1105 19:11:30.467614   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:30.467656   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:30.467676   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:30.467722   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:30.467759   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:30.467788   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:30.467847   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:30.468756   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:30.532325   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:30.559936   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:30.592995   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:30.632421   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 19:11:30.662285   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:11:30.696292   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:30.725642   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:30.750231   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:30.773213   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:30.796269   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:30.820261   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:30.837059   74485 ssh_runner.go:195] Run: openssl version
	I1105 19:11:30.842937   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:30.855033   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859637   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859720   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.865747   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:30.877678   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:30.890762   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895576   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895642   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.901686   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:30.912689   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:30.923800   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928911   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928984   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.934782   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:30.947059   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:30.951934   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:30.958065   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:30.965341   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:30.971725   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:30.977606   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:30.983486   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
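The run of openssl commands above hashes each CA so it can be symlinked into /etc/ssl/certs under its subject-hash name, and then uses -checkend 86400 to confirm every control-plane certificate remains valid for at least another day before reusing it. A small sketch of that expiry check, shelling out to openssl with the same flags; the certificate list is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithinADay wraps `openssl x509 -checkend 86400`, which exits
    // non-zero when the certificate will expire within the next 86400 seconds.
    func expiresWithinADay(certPath string) bool {
        err := exec.Command("openssl", "x509", "-noout", "-in", certPath,
            "-checkend", "86400").Run()
        return err != nil
    }

    func main() {
        certs := []string{ // a few of the paths checked in the log
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        for _, c := range certs {
            if expiresWithinADay(c) {
                fmt.Println(c, "needs regeneration")
            }
        }
    }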
	I1105 19:11:30.989212   74485 kubeadm.go:392] StartCluster: {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:30.989350   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:30.989411   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.031794   74485 cri.go:89] found id: ""
	I1105 19:11:31.031884   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:31.043178   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:31.043202   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:31.043291   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:31.054102   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:31.055256   74485 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:31.055924   74485 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-567666" cluster setting kubeconfig missing "old-k8s-version-567666" context setting]
	I1105 19:11:31.056913   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:31.064220   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:31.074582   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.125
	I1105 19:11:31.074618   74485 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:31.074628   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:31.074706   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.111157   74485 cri.go:89] found id: ""
	I1105 19:11:31.111241   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:31.130027   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:31.139917   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:31.139939   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:31.140007   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:31.150790   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:31.150868   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:31.161397   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:31.170394   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:31.170462   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:31.179594   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.188892   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:31.188952   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.199840   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:31.209166   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:31.209244   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:31.219687   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:31.231079   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:31.350667   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.094565   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.334807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.457538   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.534503   74485 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:32.534596   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:28.464017   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.962422   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:29.725325   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:32.225372   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.796963   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:30.797438   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:30.797489   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:30.797407   75501 retry.go:31] will retry after 1.774832047s: waiting for machine to come up
	I1105 19:11:32.574423   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:32.575000   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:32.575047   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:32.574929   75501 retry.go:31] will retry after 2.041093372s: waiting for machine to come up
	I1105 19:11:34.618469   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:34.618954   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:34.619015   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:34.618915   75501 retry.go:31] will retry after 2.731949113s: waiting for machine to come up
	I1105 19:11:33.034690   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:33.535594   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.035526   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.534836   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.034947   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.535108   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.035417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.535438   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.034766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.535415   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:32.962469   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.963093   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.461010   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.724484   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.224511   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.352209   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:37.352752   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:37.352783   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:37.352686   75501 retry.go:31] will retry after 3.62202055s: waiting for machine to come up
	I1105 19:11:38.035553   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:38.534702   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.035332   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.534749   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.034989   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.535354   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.035624   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.534847   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.035293   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.535363   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
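The repeated pgrep lines above are the post-restart wait for a kube-apiserver process to appear, re-checked roughly every half second. A sketch of that loop; the pgrep pattern comes straight from the log, the timeout and the rest of the scaffolding are illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
    // command line mentions "minikube" shows up, or the timeout expires.
    func waitForAPIServerProcess(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
        return "", fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
    }

    func main() {
        pid, err := waitForAPIServerProcess(1 * time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver pid:", pid)
    }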
	I1105 19:11:39.465635   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:41.961348   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:40.978791   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979231   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has current primary IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979249   73496 main.go:141] libmachine: (no-preload-459223) Found IP for machine: 192.168.72.101
	I1105 19:11:40.979258   73496 main.go:141] libmachine: (no-preload-459223) Reserving static IP address...
	I1105 19:11:40.979621   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.979650   73496 main.go:141] libmachine: (no-preload-459223) Reserved static IP address: 192.168.72.101
	I1105 19:11:40.979669   73496 main.go:141] libmachine: (no-preload-459223) DBG | skip adding static IP to network mk-no-preload-459223 - found existing host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"}
	I1105 19:11:40.979682   73496 main.go:141] libmachine: (no-preload-459223) Waiting for SSH to be available...
	I1105 19:11:40.979710   73496 main.go:141] libmachine: (no-preload-459223) DBG | Getting to WaitForSSH function...
	I1105 19:11:40.981725   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.982063   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982202   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH client type: external
	I1105 19:11:40.982227   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa (-rw-------)
	I1105 19:11:40.982258   73496 main.go:141] libmachine: (no-preload-459223) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:40.982286   73496 main.go:141] libmachine: (no-preload-459223) DBG | About to run SSH command:
	I1105 19:11:40.982310   73496 main.go:141] libmachine: (no-preload-459223) DBG | exit 0
	I1105 19:11:41.111259   73496 main.go:141] libmachine: (no-preload-459223) DBG | SSH cmd err, output: <nil>: 
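The "Using SSH client type: external" block above shows libmachine probing the guest by running the system ssh binary with host-key checking disabled and the machine's private key, and treating a successful `exit 0` as proof the VM is reachable. A sketch that assembles the same kind of command; the user, address and key path are placeholders.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeSSH runs `ssh ... exit 0` against the guest, the same liveness
    // check libmachine performs once the VM has an IP. Options mirror the log.
    func probeSSH(user, addr, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            user + "@" + addr,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run()
    }

    func main() {
        err := probeSSH("docker", "192.168.72.101",
            "/home/jenkins/.minikube/machines/no-preload-459223/id_rsa")
        if err != nil {
            fmt.Println("ssh probe failed:", err)
        }
    }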
	I1105 19:11:41.111639   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetConfigRaw
	I1105 19:11:41.112368   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.114811   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115215   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.115244   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115499   73496 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/config.json ...
	I1105 19:11:41.115687   73496 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:41.115705   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:41.115900   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.118059   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118481   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.118505   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118659   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.118833   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.118959   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.119078   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.119222   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.119426   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.119442   73496 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:41.235030   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:41.235060   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235270   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:11:41.235294   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235480   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.237980   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238288   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.238327   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238405   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.238567   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238687   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238805   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.238938   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.239150   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.239163   73496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-459223 && echo "no-preload-459223" | sudo tee /etc/hostname
	I1105 19:11:41.366664   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-459223
	
	I1105 19:11:41.366693   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.369672   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.369979   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.370006   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.370147   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.370335   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370661   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.370830   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.371067   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.371086   73496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-459223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-459223/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-459223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:41.495741   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:41.495774   73496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:41.495796   73496 buildroot.go:174] setting up certificates
	I1105 19:11:41.495804   73496 provision.go:84] configureAuth start
	I1105 19:11:41.495816   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.496076   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.498948   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499377   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.499409   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499552   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.501842   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502168   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.502198   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502367   73496 provision.go:143] copyHostCerts
	I1105 19:11:41.502428   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:41.502445   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:41.502516   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:41.502662   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:41.502674   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:41.502706   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:41.502814   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:41.502825   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:41.502853   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:41.502934   73496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.no-preload-459223 san=[127.0.0.1 192.168.72.101 localhost minikube no-preload-459223]
	I1105 19:11:41.648058   73496 provision.go:177] copyRemoteCerts
	I1105 19:11:41.648115   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:41.648137   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.650915   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651274   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.651306   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.651707   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.651878   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.652032   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:41.736549   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:11:41.759352   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:41.782205   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:41.804725   73496 provision.go:87] duration metric: took 308.906806ms to configureAuth
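	The provisioning pass above generated a fresh server certificate with SANs [127.0.0.1 192.168.72.101 localhost minikube no-preload-459223] and copied server.pem, server-key.pem and ca.pem into /etc/docker on the guest. If you need to confirm what actually landed there, a quick check along these lines works (standard openssl usage, not a step this test ran):

		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'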
	I1105 19:11:41.804755   73496 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:41.804930   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:41.805011   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.807634   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.808071   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.808498   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808657   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808792   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.808960   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.809113   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.809125   73496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:42.033406   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:42.033449   73496 machine.go:96] duration metric: took 917.749182ms to provisionDockerMachine
	I1105 19:11:42.033462   73496 start.go:293] postStartSetup for "no-preload-459223" (driver="kvm2")
	I1105 19:11:42.033475   73496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:42.033506   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.033853   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:42.033883   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.037259   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037688   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.037722   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037869   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.038063   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.038231   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.038361   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.126624   73496 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:42.130761   73496 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:42.130794   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:42.130881   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:42.131006   73496 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:42.131120   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:42.140978   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:42.163880   73496 start.go:296] duration metric: took 130.405487ms for postStartSetup
	I1105 19:11:42.163933   73496 fix.go:56] duration metric: took 19.580327925s for fixHost
	I1105 19:11:42.163953   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.166648   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.166994   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.167025   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.167196   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.167394   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167565   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167705   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.167856   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:42.168016   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:42.168025   73496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:42.279303   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833902.251467447
	
	I1105 19:11:42.279336   73496 fix.go:216] guest clock: 1730833902.251467447
	I1105 19:11:42.279351   73496 fix.go:229] Guest: 2024-11-05 19:11:42.251467447 +0000 UTC Remote: 2024-11-05 19:11:42.163937292 +0000 UTC m=+356.505256250 (delta=87.530155ms)
	I1105 19:11:42.279378   73496 fix.go:200] guest clock delta is within tolerance: 87.530155ms
	I1105 19:11:42.279387   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 19.695831159s
	I1105 19:11:42.279417   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.279660   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:42.282462   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.282828   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.282871   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.283018   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283439   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283580   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283669   73496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:42.283716   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.283811   73496 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:42.283838   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.286528   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286754   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286891   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.286917   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287097   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.287112   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287124   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287313   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287495   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287510   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287666   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287664   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.287769   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.398511   73496 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:42.404337   73496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:42.550196   73496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:42.555775   73496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:42.555853   73496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:42.571003   73496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:42.571031   73496 start.go:495] detecting cgroup driver to use...
	I1105 19:11:42.571123   73496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:42.586390   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:42.599887   73496 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:42.599944   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:42.613260   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:42.626371   73496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:42.736949   73496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:42.898897   73496 docker.go:233] disabling docker service ...
	I1105 19:11:42.898965   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:42.912534   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:42.925075   73496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:43.043425   73496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:43.175468   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:43.190803   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:43.210413   73496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:43.210496   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.221971   73496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:43.222064   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.232251   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.241540   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.251131   73496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:43.261218   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.270932   73496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.287905   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.297730   73496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:43.307263   73496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:43.307319   73496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:43.319421   73496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:43.328415   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:43.445798   73496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:43.532190   73496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:43.532284   73496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:43.536931   73496 start.go:563] Will wait 60s for crictl version
	I1105 19:11:43.536986   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.540525   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:43.576428   73496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:43.576540   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.603034   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.631229   73496 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
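	For readers skimming the runtime setup above: once cri-docker and docker are masked, the remaining work condenses to pointing crictl at the cri-o socket, patching /etc/crio/crio.conf.d/02-crio.conf, and restarting cri-o. A summary sketch of those commands, lifted from the ssh_runner entries above (a recap, not an extra step in the test):

		# point crictl at the cri-o socket
		printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
		# pin the pause image and switch cri-o to the cgroupfs cgroup manager
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		# make bridged traffic visible to iptables and enable IPv4 forwarding
		sudo modprobe br_netfilter
		sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
		# apply and verify
		sudo systemctl daemon-reload && sudo systemctl restart crio
		sudo /usr/bin/crictl version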
	I1105 19:11:39.724162   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:42.224141   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:44.224609   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:43.632482   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:43.634912   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635227   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:43.635260   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635530   73496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:43.639287   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:43.650818   73496 kubeadm.go:883] updating cluster {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:43.650963   73496 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:43.651042   73496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:43.685392   73496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:43.685421   73496 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:43.685492   73496 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.685500   73496 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.685517   73496 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.685547   73496 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.685506   73496 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.685569   73496 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.685558   73496 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.685623   73496 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.686958   73496 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.686979   73496 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.686976   73496 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.687017   73496 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.687030   73496 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.687057   73496 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1105 19:11:43.898928   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.914069   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1105 19:11:43.934388   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.940664   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.947392   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.951614   73496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1105 19:11:43.951652   73496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.951686   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.957000   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.045057   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.075256   73496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1105 19:11:44.075289   73496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1105 19:11:44.075304   73496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.075310   73496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075357   73496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1105 19:11:44.075388   73496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075417   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.075481   73496 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1105 19:11:44.075431   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075511   73496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.075543   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.102803   73496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1105 19:11:44.102856   73496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.102916   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.133582   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.133640   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.133655   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.133707   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.188042   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.188058   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.272464   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.272500   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.272467   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.272531   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.289003   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.289126   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.411162   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1105 19:11:44.411248   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.411307   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1105 19:11:44.411326   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:44.411361   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1105 19:11:44.411394   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:44.411432   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478064   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1105 19:11:44.478093   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478132   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1105 19:11:44.478152   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478178   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1105 19:11:44.478195   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1105 19:11:44.478211   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1105 19:11:44.478226   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:44.478249   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1105 19:11:44.478257   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:44.478324   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:44.889847   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.035199   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.534769   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.035551   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.535664   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.035103   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.535581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.035077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.535660   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.035462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.534898   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.962742   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.462884   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.724058   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:48.727054   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.976315   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.498135546s)
	I1105 19:11:46.976348   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1105 19:11:46.976361   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.498084867s)
	I1105 19:11:46.976386   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.498096252s)
	I1105 19:11:46.976392   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.498054417s)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1105 19:11:46.976395   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1105 19:11:46.976368   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976436   73496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.086553002s)
	I1105 19:11:46.976471   73496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1105 19:11:46.976488   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976506   73496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:46.976551   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:49.054369   73496 ssh_runner.go:235] Completed: which crictl: (2.077794607s)
	I1105 19:11:49.054455   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:49.054480   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.077976168s)
	I1105 19:11:49.054497   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1105 19:11:49.054520   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.054551   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.089648   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.509600   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455021031s)
	I1105 19:11:50.509639   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1105 19:11:50.509664   73496 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509679   73496 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.419997127s)
	I1105 19:11:50.509719   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509751   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.547301   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1105 19:11:50.547416   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:48.035320   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.535496   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.035636   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.535445   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.035499   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.535722   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.035700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.535310   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.035585   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.535468   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.962134   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.463479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.225155   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:53.723881   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:54.139987   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.592545704s)
	I1105 19:11:54.140021   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1105 19:11:54.140038   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.630297093s)
	I1105 19:11:54.140058   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1105 19:11:54.140089   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:54.140150   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:53.034919   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.535697   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.035353   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.534669   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.034957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.534747   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.035331   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.534699   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.465549   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.961291   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.725153   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:58.224417   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.887208   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.747032149s)
	I1105 19:11:55.887247   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1105 19:11:55.887278   73496 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:55.887331   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:57.753834   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.866475995s)
	I1105 19:11:57.753860   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1105 19:11:57.753879   73496 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:57.753917   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:58.605444   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1105 19:11:58.605490   73496 cache_images.go:123] Successfully loaded all cached images
	I1105 19:11:58.605498   73496 cache_images.go:92] duration metric: took 14.920064519s to LoadCachedImages
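	Each image in the LoadCachedImages pass above goes through the same flow: podman is asked whether the image already exists at the expected digest, any stale tag is removed with crictl, and the tarball minikube copied into /var/lib/minikube/images is loaded with podman. Condensed to a single image (kube-apiserver, with the exact names and paths from this log; illustrative only, not an additional step):

		sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.31.2 \
		  || {
		    # drop any mismatched tag, then load the tarball minikube copied from its local cache
		    sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
		    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
		  }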
	I1105 19:11:58.605512   73496 kubeadm.go:934] updating node { 192.168.72.101 8443 v1.31.2 crio true true} ...
	I1105 19:11:58.605627   73496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-459223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:58.605719   73496 ssh_runner.go:195] Run: crio config
	I1105 19:11:58.654396   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:11:58.654422   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:58.654432   73496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:58.654456   73496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.101 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-459223 NodeName:no-preload-459223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:58.654636   73496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-459223"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.101"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.101"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:58.654714   73496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:58.666580   73496 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:58.666659   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:58.676390   73496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:11:58.692426   73496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:58.708650   73496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
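	The 2297-byte payload written above is the kubeadm/kubelet/kube-proxy configuration rendered a few lines earlier. To eyeball the rendered file on a live profile you can pull it straight off the guest, for example (profile name taken from this run; assumes a local minikube with this profile still present):

		minikube -p no-preload-459223 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"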
	I1105 19:11:58.727451   73496 ssh_runner.go:195] Run: grep 192.168.72.101	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:58.731200   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:58.743437   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:58.850614   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:58.867662   73496 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223 for IP: 192.168.72.101
	I1105 19:11:58.867694   73496 certs.go:194] generating shared ca certs ...
	I1105 19:11:58.867715   73496 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:58.867896   73496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:58.867954   73496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:58.867988   73496 certs.go:256] generating profile certs ...
	I1105 19:11:58.868073   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/client.key
	I1105 19:11:58.868129   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key.0f61fe1e
	I1105 19:11:58.868163   73496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key
	I1105 19:11:58.868276   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:58.868316   73496 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:58.868323   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:58.868347   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:58.868380   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:58.868409   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:58.868450   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:58.869179   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:58.911433   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:58.947863   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:58.977511   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:59.022637   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:11:59.060992   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:59.086516   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:59.109616   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:59.135019   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:59.159832   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:59.184470   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:59.207138   73496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:59.224379   73496 ssh_runner.go:195] Run: openssl version
	I1105 19:11:59.230142   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:59.243624   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248086   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248157   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.253684   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:59.264169   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:59.274837   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279102   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279159   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.284540   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:59.295198   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:59.306105   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310073   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310115   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.315240   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:59.325470   73496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:59.329485   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:59.334985   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:59.340316   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:59.345717   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:59.351082   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:59.356631   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
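The openssl x509 -checkend 86400 calls above verify that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. A minimal Go sketch of an equivalent check (illustrative only; the file path and the 24-hour window are assumptions for the example, not taken from minikube's certs.go):

    // expirycheck.go: report whether a PEM certificate expires within a window,
    // roughly what `openssl x509 -checkend 86400` decides in the log above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Expiring "soon" means NotAfter falls before now+window.
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Hypothetical path for illustration; the report checks several certs this way.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }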
	I1105 19:11:59.361951   73496 kubeadm.go:392] StartCluster: {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:59.362047   73496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:59.362084   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.398746   73496 cri.go:89] found id: ""
	I1105 19:11:59.398819   73496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:59.408597   73496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:59.408614   73496 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:59.408656   73496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:59.418082   73496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:59.419128   73496 kubeconfig.go:125] found "no-preload-459223" server: "https://192.168.72.101:8443"
	I1105 19:11:59.421286   73496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:59.430458   73496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.101
	I1105 19:11:59.430490   73496 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:59.430500   73496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:59.430549   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.464047   73496 cri.go:89] found id: ""
	I1105 19:11:59.464102   73496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:59.480978   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:59.490808   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:59.490829   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:59.490871   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:59.499505   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:59.499559   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:59.508247   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:59.516942   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:59.517005   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:59.525910   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.534349   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:59.534392   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.544212   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:59.553794   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:59.553857   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:59.562739   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:59.571819   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:59.680938   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.564659   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:58.034948   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:58.534748   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.034961   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.535634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.035311   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.534756   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.035266   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.535256   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.035489   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.534701   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.963075   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.462112   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.224544   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:02.225623   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.226711   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.775338   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.844402   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.957534   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:12:00.957630   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.458375   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.958215   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.975834   73496 api_server.go:72] duration metric: took 1.018298528s to wait for apiserver process to appear ...
	I1105 19:12:01.975862   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:12:01.975884   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.774116   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.774149   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.774164   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.825378   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.825427   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.976663   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.984209   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:04.984244   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.476825   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.484608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.484644   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.975985   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.981608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.981639   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:06.476014   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:06.480296   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:12:06.487584   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:12:06.487613   73496 api_server.go:131] duration metric: took 4.511744097s to wait for apiserver health ...
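The api_server.go entries above poll https://192.168.72.101:8443/healthz roughly every 500ms, tolerating the 403 (anonymous user) and 500 (poststarthooks still completing) responses until the endpoint finally returns 200 "ok". A minimal Go sketch of that style of wait loop, assuming an unauthenticated client, skipped TLS verification (no cluster CA available to the sketch), and a hypothetical two-minute budget:

    // healthzwait.go: poll an apiserver /healthz endpoint until it returns 200,
    // analogous to the api_server.go wait seen in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz reported "ok"
                }
            }
            // 403s and 500s from a starting apiserver are expected; retry on a fixed cadence.
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.101:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }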
	I1105 19:12:06.487623   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:12:06.487632   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:12:06.489302   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:12:03.034795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:03.534764   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.034833   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.534795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.034815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.534885   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.535327   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.035253   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.535011   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.961693   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.962003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:07.461125   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.724362   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:09.224191   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.490496   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:12:06.500809   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:12:06.529242   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:12:06.542769   73496 system_pods.go:59] 8 kube-system pods found
	I1105 19:12:06.542806   73496 system_pods.go:61] "coredns-7c65d6cfc9-9vvhj" [fde1a6e7-6807-440c-a38d-4f39ede6c11e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:12:06.542818   73496 system_pods.go:61] "etcd-no-preload-459223" [398e3fc3-6902-4cbb-bc50-a72bab461839] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:12:06.542828   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [33a306b0-a41d-4ca3-9d01-69faa7825fe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:12:06.542837   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [865ae24c-d991-4650-9e17-7242f84403e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:12:06.542844   73496 system_pods.go:61] "kube-proxy-6h584" [dd35774f-a245-42af-8fe9-bd6933ad0e30] Running
	I1105 19:12:06.542852   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [27d3685e-d548-49b6-a24d-02b1f8656c66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:12:06.542859   73496 system_pods.go:61] "metrics-server-6867b74b74-5sp2j" [7ddaa66e-b4ba-4241-8dba-5fc6ab66d777] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:12:06.542864   73496 system_pods.go:61] "storage-provisioner" [49786ba3-e9fc-45ad-9418-fd3a0a7b652c] Running
	I1105 19:12:06.542873   73496 system_pods.go:74] duration metric: took 13.603868ms to wait for pod list to return data ...
	I1105 19:12:06.542883   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:12:06.549398   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:12:06.549425   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:12:06.549435   73496 node_conditions.go:105] duration metric: took 6.546615ms to run NodePressure ...
	I1105 19:12:06.549452   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:06.812829   73496 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818052   73496 kubeadm.go:739] kubelet initialised
	I1105 19:12:06.818082   73496 kubeadm.go:740] duration metric: took 5.227942ms waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818093   73496 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:12:06.823883   73496 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.830129   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830164   73496 pod_ready.go:82] duration metric: took 6.253499ms for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.830176   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830187   73496 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.834901   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834942   73496 pod_ready.go:82] duration metric: took 4.743456ms for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.834954   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834988   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.841446   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841474   73496 pod_ready.go:82] duration metric: took 6.472942ms for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.841485   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841494   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.933972   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.933998   73496 pod_ready.go:82] duration metric: took 92.493084ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.934006   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.934012   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333443   73496 pod_ready.go:93] pod "kube-proxy-6h584" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:07.333473   73496 pod_ready.go:82] duration metric: took 399.45278ms for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333486   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:09.339907   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:08.035104   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:08.534784   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.035198   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.535319   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.035258   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.534634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.035604   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.535077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.035096   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.961614   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.962113   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.724418   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.724954   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.839467   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.839725   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.035100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:13.534793   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.035120   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.535318   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.035062   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.535127   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.034840   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.534830   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.035105   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.534928   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.961398   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.224300   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.729666   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.339542   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:17.840399   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:17.840424   73496 pod_ready.go:82] duration metric: took 10.506929493s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:17.840433   73496 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:19.846676   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.035126   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:18.535446   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.035154   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.535413   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.035580   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.534802   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.035030   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.535250   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.034785   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.534700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.460480   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.461609   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.223496   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.224908   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.847279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:24.347279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.034721   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.534672   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.035358   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.534813   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.535342   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.034934   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.534766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.035389   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.534831   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.961556   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.460682   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:25.723807   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:27.724515   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.346351   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:28.035226   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:28.535577   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.034984   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.535633   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.035509   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.534907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.535421   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.034719   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.534952   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:32.535067   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:32.575052   74485 cri.go:89] found id: ""
	I1105 19:12:32.575085   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.575096   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:32.575104   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:32.575164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:32.609969   74485 cri.go:89] found id: ""
	I1105 19:12:32.610003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.610011   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:32.610017   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:32.610065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:32.642343   74485 cri.go:89] found id: ""
	I1105 19:12:32.642369   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.642376   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:32.642381   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:32.642426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:28.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:30.960340   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.725101   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.224788   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:31.346559   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:33.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.680144   74485 cri.go:89] found id: ""
	I1105 19:12:32.680177   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.680188   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:32.680196   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:32.680270   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:32.715216   74485 cri.go:89] found id: ""
	I1105 19:12:32.715248   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.715259   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:32.715267   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:32.715321   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:32.751742   74485 cri.go:89] found id: ""
	I1105 19:12:32.751771   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.751795   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:32.751803   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:32.751865   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:32.786944   74485 cri.go:89] found id: ""
	I1105 19:12:32.787003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.787015   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:32.787023   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:32.787080   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:32.820523   74485 cri.go:89] found id: ""
	I1105 19:12:32.820550   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.820557   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:32.820565   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:32.820575   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:32.873960   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:32.874000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:32.889268   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:32.889296   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:33.011825   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:33.011846   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:33.011862   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:33.082785   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:33.082827   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
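The cri.go and logs.go lines above enumerate control-plane containers with crictl ps -a --quiet --name=<component> and find none, which is why the harness falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. A rough Go sketch of that listing step, assuming crictl is on PATH and passwordless sudo is available (an illustration of the command the log runs, not minikube's own ssh_runner implementation):

    // crilist.go: list container IDs for one component via crictl, as in cri.go above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(name string) ([]string, error) {
        // Mirrors: sudo crictl ps -a --quiet --name=<component>
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("found %d kube-apiserver containers: %v\n", len(ids), ids)
    }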
	I1105 19:12:35.630678   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:35.644410   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:35.644492   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:35.679567   74485 cri.go:89] found id: ""
	I1105 19:12:35.679598   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.679607   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:35.679613   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:35.679666   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:35.713685   74485 cri.go:89] found id: ""
	I1105 19:12:35.713713   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.713721   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:35.713726   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:35.713789   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:35.749496   74485 cri.go:89] found id: ""
	I1105 19:12:35.749525   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.749536   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:35.749543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:35.749611   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:35.784228   74485 cri.go:89] found id: ""
	I1105 19:12:35.784254   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.784263   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:35.784269   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:35.784317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:35.818620   74485 cri.go:89] found id: ""
	I1105 19:12:35.818680   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.818696   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:35.818703   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:35.818769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:35.852525   74485 cri.go:89] found id: ""
	I1105 19:12:35.852554   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.852566   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:35.852574   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:35.852648   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:35.887906   74485 cri.go:89] found id: ""
	I1105 19:12:35.887931   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.887939   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:35.887944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:35.887994   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:35.920566   74485 cri.go:89] found id: ""
	I1105 19:12:35.920594   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.920602   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:35.920612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:35.920627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:35.972706   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:35.972742   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:35.986114   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:35.986141   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:36.067016   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:36.067044   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:36.067060   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:36.158947   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:36.159003   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
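	The cycle above repeats the same probe for every expected control-plane component: list all CRI containers whose name matches, and warn when the list comes back empty. Below is a minimal sketch of that probe pattern, shelling out to crictl locally rather than going through the test harness's SSH runner; the component list is read off the log, and none of this is minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probeContainers mirrors the pattern in the log: run
	// `crictl ps -a --quiet --name=<component>` and report whether any
	// container IDs came back. Local sketch only, not the ssh_runner-based
	// code the log lines come from.
	func probeContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := probeContainers(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			} else {
				fmt.Printf("%q: %d container(s)\n", c, len(ids))
			}
		}
	}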
	I1105 19:12:32.962679   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.461449   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:37.462001   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:34.724028   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:36.724174   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.728373   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.848563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.347478   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:40.347899   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
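	The interleaved pod_ready lines are three parallel test processes (73732, 74141, 73496) polling the Ready condition of their metrics-server pods every couple of seconds. A hedged sketch of one such poll is below; it uses plain kubectl with a jsonpath query rather than the framework's pod_ready helper, and the pod name and namespace are simply taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady asks kubectl for the pod's Ready condition and returns true
	// only when its status is "True". Illustrative poll, not the
	// pod_ready.go helper that produced the log lines above.
	func podReady(namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", pod,
			"-n", namespace, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for {
			ready, err := podReady("kube-system", "metrics-server-6867b74b74-vw2sm")
			switch {
			case err != nil:
				fmt.Println("poll error:", err)
			case ready:
				fmt.Println("pod is Ready")
				return
			default:
				fmt.Println(`pod has status "Ready":"False"`)
			}
			time.Sleep(2 * time.Second) // the log shows roughly 2-3s between polls
		}
	}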
	I1105 19:12:38.700738   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:38.713280   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:38.713351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:38.747293   74485 cri.go:89] found id: ""
	I1105 19:12:38.747335   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.747347   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:38.747355   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:38.747414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:38.781607   74485 cri.go:89] found id: ""
	I1105 19:12:38.781635   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.781643   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:38.781648   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:38.781703   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:38.815303   74485 cri.go:89] found id: ""
	I1105 19:12:38.815333   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.815342   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:38.815348   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:38.815397   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:38.850128   74485 cri.go:89] found id: ""
	I1105 19:12:38.850156   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.850166   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:38.850174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:38.850233   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:38.882470   74485 cri.go:89] found id: ""
	I1105 19:12:38.882493   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.882500   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:38.882506   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:38.882563   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:38.914669   74485 cri.go:89] found id: ""
	I1105 19:12:38.914698   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.914706   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:38.914713   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:38.914762   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:38.946521   74485 cri.go:89] found id: ""
	I1105 19:12:38.946548   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.946556   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:38.946561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:38.946613   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:38.979628   74485 cri.go:89] found id: ""
	I1105 19:12:38.979655   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.979663   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:38.979672   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:38.979682   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:39.056066   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:39.056102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.092303   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:39.092333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:39.143754   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:39.143790   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:39.156553   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:39.156587   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:39.220882   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
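	Every "describe nodes" attempt in this log fails the same way: with no kube-apiserver container running, the kubeconfig's localhost:8443 endpoint refuses connections. One quick way to confirm that diagnosis from the node is to hit the apiserver's health endpoint directly; the sketch below does that, with the port taken from the error text and TLS verification skipped only because this is a local check.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkAPIServer probes https://localhost:8443/healthz. A connection
	// refusal here matches the kubectl error in the log and points at the
	// apiserver simply not listening, rather than an auth or certificate
	// problem.
	func checkAPIServer() error {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// local diagnostic only: do not verify the serving cert
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			return fmt.Errorf("apiserver unreachable: %w", err)
		}
		defer resp.Body.Close()
		fmt.Println("healthz status:", resp.Status)
		return nil
	}

	func main() {
		if err := checkAPIServer(); err != nil {
			fmt.Println(err)
		}
	}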
	I1105 19:12:41.721766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:41.734823   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:41.734893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:41.768636   74485 cri.go:89] found id: ""
	I1105 19:12:41.768668   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.768685   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:41.768693   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:41.768750   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:41.809506   74485 cri.go:89] found id: ""
	I1105 19:12:41.809533   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.809541   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:41.809546   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:41.809606   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:41.849953   74485 cri.go:89] found id: ""
	I1105 19:12:41.849977   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.849985   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:41.849991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:41.850037   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:41.893042   74485 cri.go:89] found id: ""
	I1105 19:12:41.893072   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.893084   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:41.893091   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:41.893152   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:41.936259   74485 cri.go:89] found id: ""
	I1105 19:12:41.936282   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.936292   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:41.936298   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:41.936347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:41.970322   74485 cri.go:89] found id: ""
	I1105 19:12:41.970344   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.970353   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:41.970360   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:41.970427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:42.004351   74485 cri.go:89] found id: ""
	I1105 19:12:42.004375   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.004383   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:42.004388   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:42.004443   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:42.035136   74485 cri.go:89] found id: ""
	I1105 19:12:42.035163   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.035174   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:42.035185   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:42.035201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:42.086760   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:42.086801   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:42.100795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:42.100829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:42.167480   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:42.167509   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:42.167529   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:42.248625   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:42.248664   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.961606   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.461423   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:41.224956   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:43.724906   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.846509   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.847235   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.785100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:44.798182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:44.798248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:44.834080   74485 cri.go:89] found id: ""
	I1105 19:12:44.834107   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.834115   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:44.834120   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:44.834179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:44.870572   74485 cri.go:89] found id: ""
	I1105 19:12:44.870602   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.870613   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:44.870620   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:44.870691   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:44.908960   74485 cri.go:89] found id: ""
	I1105 19:12:44.908991   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.909002   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:44.909010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:44.909075   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:44.945310   74485 cri.go:89] found id: ""
	I1105 19:12:44.945342   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.945350   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:44.945355   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:44.945409   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:44.982893   74485 cri.go:89] found id: ""
	I1105 19:12:44.982935   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.982946   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:44.982953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:44.983030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:45.015529   74485 cri.go:89] found id: ""
	I1105 19:12:45.015559   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.015571   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:45.015578   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:45.015640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:45.047252   74485 cri.go:89] found id: ""
	I1105 19:12:45.047284   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.047295   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:45.047302   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:45.047364   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:45.082963   74485 cri.go:89] found id: ""
	I1105 19:12:45.083009   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.083018   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:45.083026   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:45.083039   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:45.131844   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:45.131881   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:45.145500   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:45.145530   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:45.214668   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:45.214709   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:45.214725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:45.291203   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:45.291243   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:44.963672   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.461610   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:46.223849   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:48.225352   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.346007   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:49.346691   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.831908   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:47.844873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:47.844957   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:47.881587   74485 cri.go:89] found id: ""
	I1105 19:12:47.881617   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.881628   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:47.881644   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:47.881714   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:47.918381   74485 cri.go:89] found id: ""
	I1105 19:12:47.918411   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.918423   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:47.918430   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:47.918491   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:47.950835   74485 cri.go:89] found id: ""
	I1105 19:12:47.950864   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.950880   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:47.950889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:47.950947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:47.985234   74485 cri.go:89] found id: ""
	I1105 19:12:47.985261   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.985272   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:47.985279   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:47.985338   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:48.019406   74485 cri.go:89] found id: ""
	I1105 19:12:48.019437   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.019448   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:48.019455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:48.019532   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:48.053126   74485 cri.go:89] found id: ""
	I1105 19:12:48.053160   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.053172   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:48.053180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:48.053241   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:48.086847   74485 cri.go:89] found id: ""
	I1105 19:12:48.086872   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.086879   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:48.086885   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:48.086944   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:48.122366   74485 cri.go:89] found id: ""
	I1105 19:12:48.122388   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.122396   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:48.122404   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:48.122421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:48.171579   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:48.171622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:48.185207   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:48.185234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:48.249553   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:48.249575   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:48.249586   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:48.323391   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:48.323427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:50.861939   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:50.874943   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:50.875041   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:50.911498   74485 cri.go:89] found id: ""
	I1105 19:12:50.911522   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.911530   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:50.911536   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:50.911591   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:50.946936   74485 cri.go:89] found id: ""
	I1105 19:12:50.946962   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.946988   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:50.947034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:50.947098   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:50.983220   74485 cri.go:89] found id: ""
	I1105 19:12:50.983246   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.983258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:50.983265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:50.983314   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:51.017052   74485 cri.go:89] found id: ""
	I1105 19:12:51.017078   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.017086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:51.017092   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:51.017141   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:51.051417   74485 cri.go:89] found id: ""
	I1105 19:12:51.051448   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.051459   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:51.051466   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:51.051529   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:51.085129   74485 cri.go:89] found id: ""
	I1105 19:12:51.085164   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.085177   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:51.085182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:51.085232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:51.122065   74485 cri.go:89] found id: ""
	I1105 19:12:51.122100   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.122113   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:51.122120   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:51.122178   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:51.154909   74485 cri.go:89] found id: ""
	I1105 19:12:51.154938   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.154946   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:51.154954   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:51.154966   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:51.167768   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:51.167798   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:51.231849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:51.231873   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:51.231897   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:51.314426   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:51.314487   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:51.356654   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:51.356685   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:49.961294   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.461707   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:50.723534   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.723821   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:51.347677   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.847328   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.911774   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:53.924884   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:53.924968   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:53.957690   74485 cri.go:89] found id: ""
	I1105 19:12:53.957719   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.957729   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:53.957737   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:53.957802   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:53.990717   74485 cri.go:89] found id: ""
	I1105 19:12:53.990744   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.990751   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:53.990757   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:53.990803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:54.023229   74485 cri.go:89] found id: ""
	I1105 19:12:54.023251   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.023258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:54.023263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:54.023320   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:54.056950   74485 cri.go:89] found id: ""
	I1105 19:12:54.056977   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.056987   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:54.056995   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:54.057056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:54.091729   74485 cri.go:89] found id: ""
	I1105 19:12:54.091756   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.091768   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:54.091776   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:54.091828   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:54.123964   74485 cri.go:89] found id: ""
	I1105 19:12:54.123991   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.124001   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:54.124009   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:54.124070   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:54.155164   74485 cri.go:89] found id: ""
	I1105 19:12:54.155195   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.155204   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:54.155209   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:54.155268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:54.188161   74485 cri.go:89] found id: ""
	I1105 19:12:54.188191   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.188202   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:54.188213   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:54.188226   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:54.240906   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:54.240941   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:54.254061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:54.254093   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:54.321973   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:54.322007   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:54.322026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:54.405106   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:54.405147   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:56.941801   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:56.954658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:56.954741   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:56.990372   74485 cri.go:89] found id: ""
	I1105 19:12:56.990400   74485 logs.go:282] 0 containers: []
	W1105 19:12:56.990411   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:56.990419   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:56.990479   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:57.023047   74485 cri.go:89] found id: ""
	I1105 19:12:57.023082   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.023093   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:57.023102   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:57.023163   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:57.054991   74485 cri.go:89] found id: ""
	I1105 19:12:57.055021   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.055030   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:57.055036   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:57.055094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:57.086182   74485 cri.go:89] found id: ""
	I1105 19:12:57.086214   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.086225   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:57.086233   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:57.086295   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:57.120322   74485 cri.go:89] found id: ""
	I1105 19:12:57.120350   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.120361   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:57.120368   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:57.120431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:57.153751   74485 cri.go:89] found id: ""
	I1105 19:12:57.153781   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.153790   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:57.153796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:57.153845   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:57.189208   74485 cri.go:89] found id: ""
	I1105 19:12:57.189234   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.189244   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:57.189251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:57.189317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:57.223259   74485 cri.go:89] found id: ""
	I1105 19:12:57.223292   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.223301   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:57.223308   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:57.223320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:57.273063   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:57.273098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:57.287759   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:57.287783   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:57.353387   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:57.353409   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:57.353421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:57.426374   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:57.426411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
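	After the container probes, each cycle gathers the same log sources: the kubelet and CRI-O journals, recent dmesg warnings, a kubectl describe of the nodes (which keeps failing while the apiserver is down), and a container status listing. The following sketch collects the same sources directly on the node; the pipelines are run through bash -c so the command strings from the log work unchanged, and the set of commands is copied from the log, not from minikube's source.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs each diagnostic pipeline from the log via bash -c and
	// prints its output, continuing past failures the way the log does.
	func gather() {
		steps := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range steps {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("==> %s <==\n%s", s.name, out)
			if err != nil {
				fmt.Printf("(%s exited with error: %v)\n", s.name, err)
			}
		}
	}

	func main() { gather() }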
	I1105 19:12:54.462191   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.960479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:54.723926   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.724988   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.224704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:55.847609   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:58.347062   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.348243   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.965907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:59.979081   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:59.979149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:00.010955   74485 cri.go:89] found id: ""
	I1105 19:13:00.011001   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.011012   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:00.011021   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:00.011081   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:00.044800   74485 cri.go:89] found id: ""
	I1105 19:13:00.044825   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.044832   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:00.044838   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:00.044894   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:00.082999   74485 cri.go:89] found id: ""
	I1105 19:13:00.083040   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.083050   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:00.083059   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:00.083125   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:00.120792   74485 cri.go:89] found id: ""
	I1105 19:13:00.120826   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.120835   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:00.120840   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:00.120903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:00.153156   74485 cri.go:89] found id: ""
	I1105 19:13:00.153188   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.153200   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:00.153207   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:00.153273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:00.189039   74485 cri.go:89] found id: ""
	I1105 19:13:00.189066   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.189073   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:00.189079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:00.189143   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:00.220904   74485 cri.go:89] found id: ""
	I1105 19:13:00.220932   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.220942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:00.220950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:00.221012   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:00.255414   74485 cri.go:89] found id: ""
	I1105 19:13:00.255443   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.255454   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:00.255464   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:00.255480   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:00.329027   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:00.329050   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:00.329061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:00.405813   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:00.405847   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:00.443302   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:00.443332   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:00.498413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:00.498452   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:58.960870   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.962098   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:01.723865   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.724945   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:02.846369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:04.846751   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.011897   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:03.025351   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:03.025419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:03.058881   74485 cri.go:89] found id: ""
	I1105 19:13:03.058910   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.058920   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:03.058928   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:03.059018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:03.093549   74485 cri.go:89] found id: ""
	I1105 19:13:03.093580   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.093592   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:03.093600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:03.093660   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:03.132355   74485 cri.go:89] found id: ""
	I1105 19:13:03.132384   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.132395   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:03.132402   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:03.132463   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:03.164832   74485 cri.go:89] found id: ""
	I1105 19:13:03.164864   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.164875   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:03.164888   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:03.164947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:03.203187   74485 cri.go:89] found id: ""
	I1105 19:13:03.203213   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.203221   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:03.203226   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:03.203282   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:03.238867   74485 cri.go:89] found id: ""
	I1105 19:13:03.238899   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.238921   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:03.238928   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:03.239010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:03.276139   74485 cri.go:89] found id: ""
	I1105 19:13:03.276174   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.276187   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:03.276195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:03.276251   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:03.312588   74485 cri.go:89] found id: ""
	I1105 19:13:03.312613   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.312631   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:03.312639   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:03.312650   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:03.379754   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:03.379782   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:03.379797   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:03.455719   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:03.455754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.493428   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:03.493458   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:03.545447   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:03.545481   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.060213   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:06.074756   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:06.074831   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:06.111392   74485 cri.go:89] found id: ""
	I1105 19:13:06.111421   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.111429   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:06.111435   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:06.111493   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:06.147535   74485 cri.go:89] found id: ""
	I1105 19:13:06.147568   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.147579   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:06.147585   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:06.147646   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:06.183176   74485 cri.go:89] found id: ""
	I1105 19:13:06.183198   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.183205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:06.183211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:06.183262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:06.213957   74485 cri.go:89] found id: ""
	I1105 19:13:06.213983   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.213992   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:06.213997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:06.214060   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:06.251199   74485 cri.go:89] found id: ""
	I1105 19:13:06.251227   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.251234   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:06.251240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:06.251297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:06.288128   74485 cri.go:89] found id: ""
	I1105 19:13:06.288157   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.288167   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:06.288174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:06.288236   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:06.325265   74485 cri.go:89] found id: ""
	I1105 19:13:06.325296   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.325306   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:06.325314   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:06.325375   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:06.359649   74485 cri.go:89] found id: ""
	I1105 19:13:06.359689   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.359700   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:06.359710   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:06.359725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:06.408423   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:06.408456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.421776   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:06.421804   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:06.487464   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:06.487493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:06.487507   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:06.565789   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:06.565829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.461192   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.725002   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:08.225146   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:07.346498   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.347264   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.104578   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:09.117930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:09.118022   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:09.156055   74485 cri.go:89] found id: ""
	I1105 19:13:09.156083   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.156093   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:09.156101   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:09.156161   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:09.190470   74485 cri.go:89] found id: ""
	I1105 19:13:09.190499   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.190509   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:09.190516   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:09.190576   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:09.222568   74485 cri.go:89] found id: ""
	I1105 19:13:09.222595   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.222606   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:09.222612   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:09.222677   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:09.260251   74485 cri.go:89] found id: ""
	I1105 19:13:09.260282   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.260292   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:09.260300   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:09.260362   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:09.296006   74485 cri.go:89] found id: ""
	I1105 19:13:09.296036   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.296047   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:09.296054   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:09.296118   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:09.331213   74485 cri.go:89] found id: ""
	I1105 19:13:09.331246   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.331257   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:09.331265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:09.331333   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:09.364286   74485 cri.go:89] found id: ""
	I1105 19:13:09.364316   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.364327   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:09.364335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:09.364445   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:09.398060   74485 cri.go:89] found id: ""
	I1105 19:13:09.398084   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.398092   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:09.398101   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:09.398113   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:09.447373   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:09.447409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:09.461483   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:09.461514   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:09.528213   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:09.528236   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:09.528248   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:09.607397   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:09.607430   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.146158   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:12.159183   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:12.159262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:12.193917   74485 cri.go:89] found id: ""
	I1105 19:13:12.193952   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.193963   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:12.193971   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:12.194036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:12.226558   74485 cri.go:89] found id: ""
	I1105 19:13:12.226585   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.226594   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:12.226600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:12.226662   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:12.258437   74485 cri.go:89] found id: ""
	I1105 19:13:12.258469   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.258481   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:12.258488   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:12.258557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:12.291308   74485 cri.go:89] found id: ""
	I1105 19:13:12.291341   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.291353   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:12.291361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:12.291431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:12.325768   74485 cri.go:89] found id: ""
	I1105 19:13:12.325801   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.325812   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:12.325819   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:12.325884   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:12.361077   74485 cri.go:89] found id: ""
	I1105 19:13:12.361100   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.361108   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:12.361118   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:12.361179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:12.394769   74485 cri.go:89] found id: ""
	I1105 19:13:12.394791   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.394800   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:12.394806   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:12.394864   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:12.430138   74485 cri.go:89] found id: ""
	I1105 19:13:12.430167   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.430177   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:12.430189   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:12.430200   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.472596   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:12.472637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:12.523107   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:12.523143   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:12.535797   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:12.535824   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:12.604088   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:12.604108   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:12.604123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:08.460647   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.462830   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.225468   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.225693   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:11.849320   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.347487   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:15.185725   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:15.200158   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:15.200238   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:15.238309   74485 cri.go:89] found id: ""
	I1105 19:13:15.238334   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.238342   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:15.238349   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:15.238404   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:15.272897   74485 cri.go:89] found id: ""
	I1105 19:13:15.272927   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.272938   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:15.272945   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:15.273013   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:15.307700   74485 cri.go:89] found id: ""
	I1105 19:13:15.307726   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.307737   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:15.307744   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:15.307810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:15.340156   74485 cri.go:89] found id: ""
	I1105 19:13:15.340182   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.340196   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:15.340202   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:15.340252   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:15.375930   74485 cri.go:89] found id: ""
	I1105 19:13:15.375963   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.375971   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:15.375976   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:15.376031   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:15.409876   74485 cri.go:89] found id: ""
	I1105 19:13:15.409905   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.409915   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:15.409922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:15.409984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:15.442781   74485 cri.go:89] found id: ""
	I1105 19:13:15.442808   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.442819   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:15.442825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:15.442896   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:15.480578   74485 cri.go:89] found id: ""
	I1105 19:13:15.480606   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.480614   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:15.480623   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:15.480634   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:15.530910   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:15.530952   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:15.544351   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:15.544382   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:15.618345   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:15.618373   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:15.618396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:15.704408   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:15.704451   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:14.961408   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.961486   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.724130   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.724204   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.724704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.347818   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.846423   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.244882   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:18.258667   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:18.258758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:18.292140   74485 cri.go:89] found id: ""
	I1105 19:13:18.292163   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.292171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:18.292178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:18.292235   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:18.324954   74485 cri.go:89] found id: ""
	I1105 19:13:18.324979   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.324985   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:18.324991   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:18.325048   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:18.361943   74485 cri.go:89] found id: ""
	I1105 19:13:18.361972   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.361983   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:18.361991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:18.362062   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:18.396012   74485 cri.go:89] found id: ""
	I1105 19:13:18.396036   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.396044   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:18.396050   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:18.396097   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:18.428852   74485 cri.go:89] found id: ""
	I1105 19:13:18.428875   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.428883   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:18.428889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:18.428946   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:18.464364   74485 cri.go:89] found id: ""
	I1105 19:13:18.464390   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.464397   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:18.464404   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:18.464464   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:18.496478   74485 cri.go:89] found id: ""
	I1105 19:13:18.496505   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.496514   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:18.496519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:18.496577   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:18.530313   74485 cri.go:89] found id: ""
	I1105 19:13:18.530339   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.530348   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:18.530356   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:18.530368   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:18.582593   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:18.582627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:18.596580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:18.596616   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:18.663920   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:18.663959   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:18.663974   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:18.740706   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:18.740746   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.281614   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:21.295841   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:21.295919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:21.330832   74485 cri.go:89] found id: ""
	I1105 19:13:21.330856   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.330864   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:21.330869   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:21.330922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:21.365228   74485 cri.go:89] found id: ""
	I1105 19:13:21.365257   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.365265   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:21.365269   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:21.365317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:21.418675   74485 cri.go:89] found id: ""
	I1105 19:13:21.418702   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.418719   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:21.418727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:21.418793   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:21.453966   74485 cri.go:89] found id: ""
	I1105 19:13:21.453994   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.454003   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:21.454008   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:21.454058   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:21.492030   74485 cri.go:89] found id: ""
	I1105 19:13:21.492056   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.492067   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:21.492078   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:21.492128   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:21.529146   74485 cri.go:89] found id: ""
	I1105 19:13:21.529174   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.529183   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:21.529190   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:21.529250   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:21.566491   74485 cri.go:89] found id: ""
	I1105 19:13:21.566519   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.566528   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:21.566533   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:21.566595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:21.605720   74485 cri.go:89] found id: ""
	I1105 19:13:21.605745   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.605754   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:21.605762   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:21.605772   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:21.682385   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:21.682408   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:21.682420   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:21.764519   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:21.764557   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.805090   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:21.805117   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:21.857560   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:21.857593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:19.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.961995   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.224702   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.226864   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:20.850915   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.346819   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.347230   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:24.371420   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:24.384566   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:24.384634   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:24.416283   74485 cri.go:89] found id: ""
	I1105 19:13:24.416308   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.416319   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:24.416327   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:24.416388   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:24.452875   74485 cri.go:89] found id: ""
	I1105 19:13:24.452899   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.452907   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:24.452913   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:24.452964   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:24.489946   74485 cri.go:89] found id: ""
	I1105 19:13:24.489974   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.489992   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:24.490000   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:24.490056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:24.527348   74485 cri.go:89] found id: ""
	I1105 19:13:24.527377   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.527388   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:24.527395   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:24.527451   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:24.558992   74485 cri.go:89] found id: ""
	I1105 19:13:24.559024   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.559035   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:24.559047   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:24.559105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:24.591405   74485 cri.go:89] found id: ""
	I1105 19:13:24.591437   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.591448   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:24.591455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:24.591516   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.625002   74485 cri.go:89] found id: ""
	I1105 19:13:24.625031   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.625040   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:24.625048   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:24.625114   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:24.657867   74485 cri.go:89] found id: ""
	I1105 19:13:24.657896   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.657907   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:24.657918   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:24.657931   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:24.708444   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:24.708482   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:24.721771   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:24.721814   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:24.793946   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:24.793980   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:24.793996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:24.875130   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:24.875167   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:27.412872   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:27.426996   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:27.427072   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:27.462434   74485 cri.go:89] found id: ""
	I1105 19:13:27.462458   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.462468   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:27.462475   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:27.462536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:27.496916   74485 cri.go:89] found id: ""
	I1105 19:13:27.496951   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.496962   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:27.496969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:27.497035   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:27.528826   74485 cri.go:89] found id: ""
	I1105 19:13:27.528853   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.528861   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:27.528867   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:27.528919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:27.563164   74485 cri.go:89] found id: ""
	I1105 19:13:27.563193   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.563204   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:27.563210   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:27.563284   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:27.600136   74485 cri.go:89] found id: ""
	I1105 19:13:27.600164   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.600174   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:27.600180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:27.600247   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:27.634326   74485 cri.go:89] found id: ""
	I1105 19:13:27.634358   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.634368   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:27.634377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:27.634452   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.462295   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:26.961567   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.723935   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.725498   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.847362   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.349542   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.668154   74485 cri.go:89] found id: ""
	I1105 19:13:27.668185   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.668196   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:27.668203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:27.668263   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:27.706016   74485 cri.go:89] found id: ""
	I1105 19:13:27.706043   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.706051   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:27.706059   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:27.706071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:27.755890   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:27.755929   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:27.773038   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:27.773063   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:27.863392   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:27.863414   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:27.863429   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:27.949149   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:27.949185   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.489333   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:30.502794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:30.502878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:30.536263   74485 cri.go:89] found id: ""
	I1105 19:13:30.536289   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.536297   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:30.536302   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:30.536347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:30.570418   74485 cri.go:89] found id: ""
	I1105 19:13:30.570445   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.570455   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:30.570462   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:30.570523   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:30.601972   74485 cri.go:89] found id: ""
	I1105 19:13:30.602003   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.602013   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:30.602020   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:30.602086   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:30.634151   74485 cri.go:89] found id: ""
	I1105 19:13:30.634183   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.634195   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:30.634203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:30.634265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:30.666384   74485 cri.go:89] found id: ""
	I1105 19:13:30.666415   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.666425   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:30.666433   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:30.666498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:30.699587   74485 cri.go:89] found id: ""
	I1105 19:13:30.699619   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.699631   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:30.699639   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:30.699699   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:30.731917   74485 cri.go:89] found id: ""
	I1105 19:13:30.731972   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.731983   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:30.731990   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:30.732051   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:30.768807   74485 cri.go:89] found id: ""
	I1105 19:13:30.768832   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.768840   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:30.768849   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:30.768860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:30.848594   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:30.848626   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.889031   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:30.889067   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:30.940550   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:30.940588   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:30.953810   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:30.953845   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:31.023633   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:29.461686   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:31.961484   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.225024   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.723965   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.847298   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:35.347135   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:33.524150   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:33.539025   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:33.539112   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:33.584756   74485 cri.go:89] found id: ""
	I1105 19:13:33.584786   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.584799   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:33.584807   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:33.584869   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:33.624785   74485 cri.go:89] found id: ""
	I1105 19:13:33.624816   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.624829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:33.624836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:33.625025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:33.668750   74485 cri.go:89] found id: ""
	I1105 19:13:33.668783   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.668794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:33.668804   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:33.668867   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:33.701675   74485 cri.go:89] found id: ""
	I1105 19:13:33.701707   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.701735   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:33.701743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:33.701817   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:33.737368   74485 cri.go:89] found id: ""
	I1105 19:13:33.737393   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.737401   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:33.737407   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:33.737458   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:33.770589   74485 cri.go:89] found id: ""
	I1105 19:13:33.770620   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.770630   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:33.770638   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:33.770704   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:33.802635   74485 cri.go:89] found id: ""
	I1105 19:13:33.802668   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.802680   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:33.802687   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:33.802751   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:33.839274   74485 cri.go:89] found id: ""
	I1105 19:13:33.839301   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.839309   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:33.839317   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:33.839328   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:33.881049   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:33.881090   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:33.932704   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:33.932743   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:33.945979   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:33.946007   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:34.017355   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:34.017375   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:34.017390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:36.596284   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:36.608240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:36.608306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:36.641846   74485 cri.go:89] found id: ""
	I1105 19:13:36.641878   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.641887   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:36.641901   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:36.641966   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:36.676553   74485 cri.go:89] found id: ""
	I1105 19:13:36.676584   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.676595   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:36.676602   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:36.676669   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:36.711931   74485 cri.go:89] found id: ""
	I1105 19:13:36.711961   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.711972   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:36.711980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:36.712042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:36.748510   74485 cri.go:89] found id: ""
	I1105 19:13:36.748534   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.748542   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:36.748547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:36.748596   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:36.781869   74485 cri.go:89] found id: ""
	I1105 19:13:36.781899   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.781912   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:36.781922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:36.781983   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:36.816574   74485 cri.go:89] found id: ""
	I1105 19:13:36.816597   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.816605   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:36.816610   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:36.816658   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:36.852894   74485 cri.go:89] found id: ""
	I1105 19:13:36.852921   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.852928   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:36.852934   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:36.852996   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:36.891732   74485 cri.go:89] found id: ""
	I1105 19:13:36.891764   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.891783   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:36.891795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:36.891810   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:36.964948   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:36.964972   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:36.964987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:37.043727   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:37.043765   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:37.084306   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:37.084333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:37.133238   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:37.133274   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:34.461773   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:36.960440   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:34.724805   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.224830   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.227912   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.347383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.347770   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.647492   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:39.659944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:39.660025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:39.695382   74485 cri.go:89] found id: ""
	I1105 19:13:39.695405   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.695415   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:39.695422   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:39.695480   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:39.731807   74485 cri.go:89] found id: ""
	I1105 19:13:39.731833   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.731841   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:39.731846   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:39.731895   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:39.766913   74485 cri.go:89] found id: ""
	I1105 19:13:39.766945   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.766955   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:39.766963   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:39.767049   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:39.800265   74485 cri.go:89] found id: ""
	I1105 19:13:39.800288   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.800296   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:39.800301   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:39.800346   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:39.832753   74485 cri.go:89] found id: ""
	I1105 19:13:39.832781   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.832789   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:39.832794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:39.832843   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:39.865950   74485 cri.go:89] found id: ""
	I1105 19:13:39.865980   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.865990   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:39.865997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:39.866046   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:39.902918   74485 cri.go:89] found id: ""
	I1105 19:13:39.902948   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.902957   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:39.902962   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:39.903039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:39.935086   74485 cri.go:89] found id: ""
	I1105 19:13:39.935117   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.935129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:39.935139   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:39.935152   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:39.997935   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:39.997961   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:39.997976   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:40.076794   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:40.076852   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:40.114178   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:40.114209   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:40.163512   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:40.163550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:38.961003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:40.962241   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.724237   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:43.725317   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.847149   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:44.346097   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:42.676843   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:42.689855   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:42.689930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:42.724108   74485 cri.go:89] found id: ""
	I1105 19:13:42.724139   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.724148   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:42.724156   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:42.724218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:42.760816   74485 cri.go:89] found id: ""
	I1105 19:13:42.760844   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.760854   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:42.760861   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:42.760924   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:42.795111   74485 cri.go:89] found id: ""
	I1105 19:13:42.795134   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.795142   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:42.795147   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:42.795195   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:42.832964   74485 cri.go:89] found id: ""
	I1105 19:13:42.832988   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.832997   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:42.833003   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:42.833065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:42.868817   74485 cri.go:89] found id: ""
	I1105 19:13:42.868848   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.868858   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:42.868865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:42.868933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:42.902015   74485 cri.go:89] found id: ""
	I1105 19:13:42.902044   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.902051   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:42.902056   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:42.902146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:42.934298   74485 cri.go:89] found id: ""
	I1105 19:13:42.934322   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.934330   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:42.934335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:42.934385   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:42.969804   74485 cri.go:89] found id: ""
	I1105 19:13:42.969831   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.969843   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:42.969854   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:42.969873   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:43.019922   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:43.019959   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:43.033594   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:43.033622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:43.108220   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:43.108240   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:43.108251   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:43.191946   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:43.191987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:45.730728   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:45.743344   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:45.743419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:45.777693   74485 cri.go:89] found id: ""
	I1105 19:13:45.777728   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.777739   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:45.777747   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:45.777810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:45.810195   74485 cri.go:89] found id: ""
	I1105 19:13:45.810222   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.810233   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:45.810240   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:45.810308   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:45.851210   74485 cri.go:89] found id: ""
	I1105 19:13:45.851240   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.851247   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:45.851252   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:45.851311   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:45.885501   74485 cri.go:89] found id: ""
	I1105 19:13:45.885531   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.885540   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:45.885546   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:45.885595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:45.921638   74485 cri.go:89] found id: ""
	I1105 19:13:45.921667   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.921676   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:45.921684   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:45.921745   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:45.954341   74485 cri.go:89] found id: ""
	I1105 19:13:45.954373   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.954384   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:45.954394   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:45.954461   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:45.988840   74485 cri.go:89] found id: ""
	I1105 19:13:45.988865   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.988873   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:45.988879   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:45.988949   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:46.025409   74485 cri.go:89] found id: ""
	I1105 19:13:46.025441   74485 logs.go:282] 0 containers: []
	W1105 19:13:46.025458   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:46.025470   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:46.025486   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:46.037763   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:46.037787   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:46.112619   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:46.112663   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:46.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:46.192165   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:46.192199   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:46.233235   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:46.233263   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:42.962569   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:45.461256   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:47.461781   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.225004   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.723774   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.346687   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.787685   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:48.800681   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:48.800749   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:48.835344   74485 cri.go:89] found id: ""
	I1105 19:13:48.835366   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.835374   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:48.835383   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:48.835429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:48.867447   74485 cri.go:89] found id: ""
	I1105 19:13:48.867474   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.867483   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:48.867488   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:48.867536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:48.899135   74485 cri.go:89] found id: ""
	I1105 19:13:48.899160   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.899167   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:48.899172   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:48.899221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:48.932208   74485 cri.go:89] found id: ""
	I1105 19:13:48.932243   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.932255   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:48.932263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:48.932326   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:48.967174   74485 cri.go:89] found id: ""
	I1105 19:13:48.967202   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.967210   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:48.967215   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:48.967267   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:48.998902   74485 cri.go:89] found id: ""
	I1105 19:13:48.998932   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.998942   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:48.998950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:48.999030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:49.030946   74485 cri.go:89] found id: ""
	I1105 19:13:49.030988   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.030999   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:49.031006   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:49.031074   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:49.063489   74485 cri.go:89] found id: ""
	I1105 19:13:49.063517   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.063528   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:49.063540   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:49.063555   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:49.116433   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:49.116477   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:49.131439   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:49.131476   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:49.199770   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:49.199795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:49.199809   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:49.275503   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:49.275543   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:51.816208   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:51.829328   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:51.829399   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:51.863320   74485 cri.go:89] found id: ""
	I1105 19:13:51.863346   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.863354   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:51.863359   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:51.863406   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:51.896589   74485 cri.go:89] found id: ""
	I1105 19:13:51.896618   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.896628   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:51.896635   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:51.896697   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:51.933744   74485 cri.go:89] found id: ""
	I1105 19:13:51.933769   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.933776   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:51.933781   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:51.933829   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:51.970806   74485 cri.go:89] found id: ""
	I1105 19:13:51.970829   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.970836   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:51.970842   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:51.970889   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:52.004087   74485 cri.go:89] found id: ""
	I1105 19:13:52.004116   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.004124   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:52.004129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:52.004186   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:52.041721   74485 cri.go:89] found id: ""
	I1105 19:13:52.041752   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.041763   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:52.041771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:52.041835   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:52.079253   74485 cri.go:89] found id: ""
	I1105 19:13:52.079277   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.079285   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:52.079292   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:52.079351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:52.112604   74485 cri.go:89] found id: ""
	I1105 19:13:52.112642   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.112653   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:52.112664   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:52.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:52.160799   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:52.160841   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:52.174323   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:52.174355   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:52.247358   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:52.247383   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:52.247395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:52.326071   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:52.326108   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:49.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.461239   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.724514   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.724742   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.848418   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:53.346329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.347199   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:54.866454   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:54.879015   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:54.879093   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:54.911729   74485 cri.go:89] found id: ""
	I1105 19:13:54.911765   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.911777   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:54.911785   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:54.911846   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:54.943137   74485 cri.go:89] found id: ""
	I1105 19:13:54.943169   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.943185   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:54.943193   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:54.943253   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:54.977951   74485 cri.go:89] found id: ""
	I1105 19:13:54.977980   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.977991   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:54.977998   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:54.978061   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:55.009453   74485 cri.go:89] found id: ""
	I1105 19:13:55.009478   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.009486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:55.009491   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:55.009537   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:55.040790   74485 cri.go:89] found id: ""
	I1105 19:13:55.040814   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.040821   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:55.040827   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:55.040878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:55.073401   74485 cri.go:89] found id: ""
	I1105 19:13:55.073430   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.073441   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:55.073449   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:55.073508   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:55.105419   74485 cri.go:89] found id: ""
	I1105 19:13:55.105443   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.105451   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:55.105456   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:55.105511   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:55.137363   74485 cri.go:89] found id: ""
	I1105 19:13:55.137395   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.137406   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:55.137416   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:55.137431   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:55.174176   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:55.174201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:55.221658   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:55.221693   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:55.235044   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:55.235070   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:55.308192   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:55.308218   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:55.308234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:54.461424   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:56.961198   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.223920   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.224915   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.847329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:00.347371   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.892462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:57.905472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:57.905543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:57.946044   74485 cri.go:89] found id: ""
	I1105 19:13:57.946071   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.946081   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:57.946089   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:57.946149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:57.980762   74485 cri.go:89] found id: ""
	I1105 19:13:57.980791   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.980803   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:57.980811   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:57.980874   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:58.013351   74485 cri.go:89] found id: ""
	I1105 19:13:58.013374   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.013381   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:58.013386   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:58.013433   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:58.049056   74485 cri.go:89] found id: ""
	I1105 19:13:58.049083   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.049091   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:58.049097   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:58.049147   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:58.081476   74485 cri.go:89] found id: ""
	I1105 19:13:58.081507   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.081517   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:58.081524   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:58.081583   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:58.114526   74485 cri.go:89] found id: ""
	I1105 19:13:58.114554   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.114564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:58.114571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:58.114630   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:58.148219   74485 cri.go:89] found id: ""
	I1105 19:13:58.148243   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.148252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:58.148257   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:58.148312   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:58.183254   74485 cri.go:89] found id: ""
	I1105 19:13:58.183277   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.183285   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:58.183292   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:58.183304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:58.234747   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:58.234785   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:58.248269   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:58.248300   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:58.313290   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:58.313312   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:58.313327   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:58.389847   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:58.389889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:00.927957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:00.941525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:00.941593   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:00.974891   74485 cri.go:89] found id: ""
	I1105 19:14:00.974920   74485 logs.go:282] 0 containers: []
	W1105 19:14:00.974931   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:00.974938   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:00.975018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:01.008224   74485 cri.go:89] found id: ""
	I1105 19:14:01.008250   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.008262   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:01.008270   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:01.008328   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:01.044514   74485 cri.go:89] found id: ""
	I1105 19:14:01.044545   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.044553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:01.044559   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:01.044614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:01.077091   74485 cri.go:89] found id: ""
	I1105 19:14:01.077124   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.077135   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:01.077141   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:01.077197   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:01.109947   74485 cri.go:89] found id: ""
	I1105 19:14:01.109976   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.109986   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:01.109994   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:01.110054   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:01.146162   74485 cri.go:89] found id: ""
	I1105 19:14:01.146193   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.146203   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:01.146211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:01.146275   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:01.180335   74485 cri.go:89] found id: ""
	I1105 19:14:01.180360   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.180370   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:01.180377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:01.180436   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:01.216160   74485 cri.go:89] found id: ""
	I1105 19:14:01.216189   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.216199   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:01.216221   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:01.216236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:01.229426   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:01.229455   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:01.298847   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:01.298874   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:01.298889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:01.375255   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:01.375299   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:01.417946   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:01.418026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:59.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.961362   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:59.724103   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.724976   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.725344   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:02.349032   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:04.847734   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.973713   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:03.987128   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:03.987198   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:04.020050   74485 cri.go:89] found id: ""
	I1105 19:14:04.020081   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.020091   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:04.020098   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:04.020164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:04.053458   74485 cri.go:89] found id: ""
	I1105 19:14:04.053485   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.053492   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:04.053498   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:04.053544   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:04.086417   74485 cri.go:89] found id: ""
	I1105 19:14:04.086442   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.086455   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:04.086461   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:04.086513   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:04.122035   74485 cri.go:89] found id: ""
	I1105 19:14:04.122059   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.122067   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:04.122073   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:04.122120   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:04.158732   74485 cri.go:89] found id: ""
	I1105 19:14:04.158758   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.158765   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:04.158771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:04.158822   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:04.190497   74485 cri.go:89] found id: ""
	I1105 19:14:04.190525   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.190536   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:04.190543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:04.190604   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:04.222040   74485 cri.go:89] found id: ""
	I1105 19:14:04.222066   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.222074   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:04.222079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:04.222131   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:04.258753   74485 cri.go:89] found id: ""
	I1105 19:14:04.258781   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.258793   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:04.258804   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:04.258819   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:04.299966   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:04.300052   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:04.355364   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:04.355395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:04.368954   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:04.368980   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:04.431658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:04.431688   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:04.431700   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.015289   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:07.029580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:07.029644   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:07.066931   74485 cri.go:89] found id: ""
	I1105 19:14:07.066964   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.066993   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:07.067004   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:07.067059   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:07.104315   74485 cri.go:89] found id: ""
	I1105 19:14:07.104341   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.104349   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:07.104354   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:07.104401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:07.141271   74485 cri.go:89] found id: ""
	I1105 19:14:07.141298   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.141305   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:07.141311   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:07.141360   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:07.174600   74485 cri.go:89] found id: ""
	I1105 19:14:07.174631   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.174643   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:07.174653   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:07.174707   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:07.211920   74485 cri.go:89] found id: ""
	I1105 19:14:07.211958   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.211969   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:07.211975   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:07.212027   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:07.248238   74485 cri.go:89] found id: ""
	I1105 19:14:07.248269   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.248280   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:07.248286   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:07.248344   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:07.279833   74485 cri.go:89] found id: ""
	I1105 19:14:07.279864   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.279874   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:07.279881   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:07.279931   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:07.317411   74485 cri.go:89] found id: ""
	I1105 19:14:07.317441   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.317452   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:07.317461   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:07.317474   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:07.390499   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:07.390535   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:07.390556   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.488858   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:07.488895   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:07.528612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:07.528645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:07.581884   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:07.581927   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
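
	The cycle that just ended is the pattern that repeats for the rest of this log: the collector asks CRI-O for each expected control-plane container with `sudo crictl ps -a --quiet --name=<component>`, finds none, and then falls back to the kubelet journal, dmesg, `describe nodes`, the CRI-O journal, and container status. A minimal Go sketch of that probe loop follows; it assumes only that crictl is installed on the node and is an editorial illustration, not minikube's own logs.go code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probe mirrors the crictl query seen in the log: it returns the IDs of
	// containers whose name matches the given component, or nil if none exist.
	func probe(component string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil // treat a failed query the same as "no containers"
		}
		return strings.Fields(string(out))
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids := probe(c)
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}

	In this run every probe returns an empty ID list, which is why each cycle prints only "No container was found matching ..." warnings.
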
	I1105 19:14:03.961433   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.460953   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.223402   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:08.723797   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:07.348258   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:09.846465   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
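
	The interleaved pod_ready.go lines belong to three parallel test processes (73732, 74141, 73496), each polling whether its cluster's metrics-server pod has reached the Ready condition. A rough equivalent of one such check, shelling out to kubectl with a JSONPath filter, is sketched below; the kubeconfig path and pod name are placeholders, not values from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// podReady reports whether the named pod's Ready condition is "True",
	// mirroring the pod_ready.go polling seen in the log above.
	func podReady(kubeconfig, namespace, pod string) (bool, error) {
		jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"-n", namespace, "get", "pod", pod, "-o", "jsonpath="+jsonpath).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// Placeholder pod name; the tests poll the metrics-server pod by its generated name.
		ready, err := podReady("/path/to/kubeconfig", "kube-system", "metrics-server-xxxxx")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("Ready:", ready)
	}
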
	I1105 19:14:10.096089   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:10.110828   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:10.110898   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:10.147299   74485 cri.go:89] found id: ""
	I1105 19:14:10.147332   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.147344   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:10.147350   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:10.147401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:10.181457   74485 cri.go:89] found id: ""
	I1105 19:14:10.181482   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.181489   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:10.181495   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:10.181540   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:10.215210   74485 cri.go:89] found id: ""
	I1105 19:14:10.215241   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.215252   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:10.215259   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:10.215319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:10.249587   74485 cri.go:89] found id: ""
	I1105 19:14:10.249609   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.249617   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:10.249625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:10.249679   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:10.282566   74485 cri.go:89] found id: ""
	I1105 19:14:10.282591   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.282598   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:10.282604   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:10.282672   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:10.314312   74485 cri.go:89] found id: ""
	I1105 19:14:10.314344   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.314355   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:10.314361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:10.314415   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:10.346988   74485 cri.go:89] found id: ""
	I1105 19:14:10.347016   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.347028   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:10.347035   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:10.347088   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:10.381326   74485 cri.go:89] found id: ""
	I1105 19:14:10.381354   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.381370   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:10.381380   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:10.381394   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:10.418311   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:10.418344   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:10.469559   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:10.469590   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:10.482394   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:10.482427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:10.551831   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:10.551854   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:10.551870   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:08.462072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.961478   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:12.724974   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:11.846737   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:14.346050   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:13.127576   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:13.143182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:13.143242   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:13.188794   74485 cri.go:89] found id: ""
	I1105 19:14:13.188827   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.188839   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:13.188846   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:13.188897   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:13.221790   74485 cri.go:89] found id: ""
	I1105 19:14:13.221818   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.221829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:13.221836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:13.221893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:13.255164   74485 cri.go:89] found id: ""
	I1105 19:14:13.255194   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.255205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:13.255212   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:13.255272   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:13.288203   74485 cri.go:89] found id: ""
	I1105 19:14:13.288231   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.288241   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:13.288249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:13.288307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:13.321438   74485 cri.go:89] found id: ""
	I1105 19:14:13.321463   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.321475   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:13.321482   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:13.321541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:13.361858   74485 cri.go:89] found id: ""
	I1105 19:14:13.361886   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.361897   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:13.361905   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:13.361979   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:13.394210   74485 cri.go:89] found id: ""
	I1105 19:14:13.394239   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.394252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:13.394260   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:13.394324   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:13.434665   74485 cri.go:89] found id: ""
	I1105 19:14:13.434697   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.434705   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:13.434712   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:13.434724   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:13.447849   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:13.447875   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:13.514353   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:13.514377   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:13.514390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:13.590746   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:13.590784   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:13.627704   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:13.627732   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:16.180171   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:16.193282   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:16.193342   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:16.230087   74485 cri.go:89] found id: ""
	I1105 19:14:16.230118   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.230128   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:16.230137   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:16.230200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:16.264315   74485 cri.go:89] found id: ""
	I1105 19:14:16.264348   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.264360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:16.264368   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:16.264429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:16.298197   74485 cri.go:89] found id: ""
	I1105 19:14:16.298231   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.298243   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:16.298251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:16.298316   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:16.333149   74485 cri.go:89] found id: ""
	I1105 19:14:16.333180   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.333193   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:16.333203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:16.333268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:16.366863   74485 cri.go:89] found id: ""
	I1105 19:14:16.366887   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.366895   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:16.366900   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:16.366947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:16.400434   74485 cri.go:89] found id: ""
	I1105 19:14:16.400458   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.400466   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:16.400472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:16.400524   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:16.435475   74485 cri.go:89] found id: ""
	I1105 19:14:16.435497   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.435504   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:16.435510   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:16.435560   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:16.470577   74485 cri.go:89] found id: ""
	I1105 19:14:16.470604   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.470612   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:16.470620   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:16.470632   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:16.483061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:16.483094   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:16.550662   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:16.550690   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:16.550702   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:16.629372   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:16.629411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:16.669488   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:16.669526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
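
	With no control-plane containers to inspect, each cycle falls back to the systemd journals, pulling the last 400 lines of the kubelet and CRI-O units exactly as the Run lines above show. A small sketch of that fallback, assuming journalctl is present and may be invoked via sudo:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// lastJournalLines pulls the tail of a systemd unit's journal, the same
	// fallback source the collector uses when crictl finds nothing to inspect.
	func lastJournalLines(unit string, n int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "crio"} {
			out, err := lastJournalLines(unit, 400)
			if err != nil {
				fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
				continue
			}
			fmt.Printf("--- last 400 lines of %s ---\n%s\n", unit, out)
		}
	}
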
	I1105 19:14:12.961576   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.461132   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.461748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.224068   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.225065   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:16.347305   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:18.847161   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.219244   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:19.232682   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:19.232744   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:19.264594   74485 cri.go:89] found id: ""
	I1105 19:14:19.264624   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.264635   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:19.264649   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:19.264708   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:19.301434   74485 cri.go:89] found id: ""
	I1105 19:14:19.301468   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.301479   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:19.301487   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:19.301558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:19.333465   74485 cri.go:89] found id: ""
	I1105 19:14:19.333494   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.333502   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:19.333508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:19.333558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:19.365865   74485 cri.go:89] found id: ""
	I1105 19:14:19.365892   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.365900   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:19.365906   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:19.365958   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:19.406533   74485 cri.go:89] found id: ""
	I1105 19:14:19.406563   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.406575   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:19.406583   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:19.406639   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:19.439351   74485 cri.go:89] found id: ""
	I1105 19:14:19.439377   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.439386   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:19.439392   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:19.439438   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:19.475033   74485 cri.go:89] found id: ""
	I1105 19:14:19.475058   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.475065   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:19.475070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:19.475119   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:19.508638   74485 cri.go:89] found id: ""
	I1105 19:14:19.508662   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.508670   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:19.508678   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:19.508689   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:19.588268   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:19.588293   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:19.588304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:19.671382   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:19.671415   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:19.716497   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:19.716526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:19.769686   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:19.769722   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.283476   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:22.296393   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:22.296456   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:22.331226   74485 cri.go:89] found id: ""
	I1105 19:14:22.331247   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.331255   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:22.331261   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:22.331306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:22.363466   74485 cri.go:89] found id: ""
	I1105 19:14:22.363499   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.363510   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:22.363518   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:22.363586   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:22.397025   74485 cri.go:89] found id: ""
	I1105 19:14:22.397052   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.397061   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:22.397066   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:22.397116   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:22.429450   74485 cri.go:89] found id: ""
	I1105 19:14:22.429476   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.429486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:22.429493   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:22.429554   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:22.461615   74485 cri.go:89] found id: ""
	I1105 19:14:22.461643   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.461654   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:22.461660   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:22.461728   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:22.492470   74485 cri.go:89] found id: ""
	I1105 19:14:22.492502   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.492513   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:22.492521   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:22.492587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:22.525335   74485 cri.go:89] found id: ""
	I1105 19:14:22.525358   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.525366   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:22.525372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:22.525423   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:22.558854   74485 cri.go:89] found id: ""
	I1105 19:14:22.558881   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.558890   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:22.558901   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:22.558916   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:22.608638   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:22.608674   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.621769   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:22.621800   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:14:19.461812   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.960286   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.724482   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:22.224505   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:24.225072   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.347018   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:23.347099   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	W1105 19:14:22.688971   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:22.688998   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:22.689012   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:22.770517   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:22.770558   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:25.315778   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:25.335372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:25.335444   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:25.383988   74485 cri.go:89] found id: ""
	I1105 19:14:25.384019   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.384029   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:25.384036   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:25.384096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:25.432070   74485 cri.go:89] found id: ""
	I1105 19:14:25.432103   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.432115   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:25.432122   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:25.432184   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:25.464859   74485 cri.go:89] found id: ""
	I1105 19:14:25.464891   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.464902   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:25.464909   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:25.464976   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:25.498684   74485 cri.go:89] found id: ""
	I1105 19:14:25.498712   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.498719   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:25.498724   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:25.498777   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:25.532998   74485 cri.go:89] found id: ""
	I1105 19:14:25.533023   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.533032   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:25.533039   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:25.533084   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:25.568101   74485 cri.go:89] found id: ""
	I1105 19:14:25.568130   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.568138   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:25.568144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:25.568208   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:25.600470   74485 cri.go:89] found id: ""
	I1105 19:14:25.600495   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.600503   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:25.600509   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:25.600564   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:25.631792   74485 cri.go:89] found id: ""
	I1105 19:14:25.631824   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.631834   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:25.631845   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:25.631860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:25.683820   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:25.683856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:25.698066   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:25.698095   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:25.764838   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:25.764869   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:25.764886   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:25.838791   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:25.838828   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:23.966002   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.460153   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.724324   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:29.223490   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:25.847528   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.346739   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.376183   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:28.389686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:28.389760   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:28.424180   74485 cri.go:89] found id: ""
	I1105 19:14:28.424209   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.424221   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:28.424229   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:28.424289   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:28.462742   74485 cri.go:89] found id: ""
	I1105 19:14:28.462765   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.462777   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:28.462784   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:28.462839   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:28.494550   74485 cri.go:89] found id: ""
	I1105 19:14:28.494574   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.494581   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:28.494588   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:28.494667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:28.525606   74485 cri.go:89] found id: ""
	I1105 19:14:28.525632   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.525639   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:28.525645   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:28.525696   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:28.558599   74485 cri.go:89] found id: ""
	I1105 19:14:28.558628   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.558638   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:28.558644   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:28.558701   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:28.590496   74485 cri.go:89] found id: ""
	I1105 19:14:28.590522   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.590530   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:28.590535   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:28.590599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:28.622748   74485 cri.go:89] found id: ""
	I1105 19:14:28.622772   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.622780   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:28.622786   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:28.622836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:28.656452   74485 cri.go:89] found id: ""
	I1105 19:14:28.656477   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.656485   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:28.656493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:28.656504   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.736458   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:28.736505   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:28.771923   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:28.771954   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:28.821099   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:28.821133   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:28.834698   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:28.834726   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:28.900543   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
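
	Every `describe nodes` attempt in this log fails the same way: the connection to localhost:8443 is refused because no kube-apiserver container is running. A quick way to confirm that from the node is a plain TCP probe of the apiserver port; the address below is the default secure port quoted in the error, and nothing else is assumed.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The refused endpoint from the log: the apiserver's default secure port on localhost.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port not reachable:", err) // matches the "connection refused" above
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
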
	I1105 19:14:31.400733   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:31.414573   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:31.414647   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:31.452244   74485 cri.go:89] found id: ""
	I1105 19:14:31.452275   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.452286   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:31.452293   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:31.452353   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:31.485898   74485 cri.go:89] found id: ""
	I1105 19:14:31.485920   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.485935   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:31.485940   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:31.486009   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:31.522826   74485 cri.go:89] found id: ""
	I1105 19:14:31.522850   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.522858   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:31.522865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:31.522925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:31.560096   74485 cri.go:89] found id: ""
	I1105 19:14:31.560136   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.560164   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:31.560174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:31.560234   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:31.596698   74485 cri.go:89] found id: ""
	I1105 19:14:31.596725   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.596733   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:31.596738   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:31.596792   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:31.635109   74485 cri.go:89] found id: ""
	I1105 19:14:31.635138   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.635148   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:31.635156   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:31.635221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:31.667612   74485 cri.go:89] found id: ""
	I1105 19:14:31.667639   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.667651   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:31.667658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:31.667726   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:31.699815   74485 cri.go:89] found id: ""
	I1105 19:14:31.699844   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.699854   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:31.699864   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:31.699879   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:31.737165   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:31.737196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:31.788513   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:31.788550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:31.801580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:31.801609   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:31.871658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.871683   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:31.871696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.462108   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.961875   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:31.223977   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:33.724027   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.847090   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:32.847233   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.847857   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.450954   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:34.466129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:34.466204   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:34.499984   74485 cri.go:89] found id: ""
	I1105 19:14:34.500009   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.500020   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:34.500027   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:34.500091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:34.532923   74485 cri.go:89] found id: ""
	I1105 19:14:34.532950   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.532958   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:34.532969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:34.533017   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:34.566772   74485 cri.go:89] found id: ""
	I1105 19:14:34.566803   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.566811   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:34.566817   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:34.566872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:34.607398   74485 cri.go:89] found id: ""
	I1105 19:14:34.607422   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.607430   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:34.607435   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:34.607497   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:34.640091   74485 cri.go:89] found id: ""
	I1105 19:14:34.640123   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.640135   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:34.640143   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:34.640207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:34.677164   74485 cri.go:89] found id: ""
	I1105 19:14:34.677201   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.677211   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:34.677217   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:34.677266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:34.714900   74485 cri.go:89] found id: ""
	I1105 19:14:34.714931   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.714942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:34.714949   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:34.715023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:34.751003   74485 cri.go:89] found id: ""
	I1105 19:14:34.751032   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.751040   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:34.751048   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:34.751059   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:34.822279   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:34.822301   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:34.822315   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:34.898607   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:34.898640   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:34.934727   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:34.934754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:34.985935   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:34.985969   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.500117   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:37.512467   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:37.512541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:37.544914   74485 cri.go:89] found id: ""
	I1105 19:14:37.544941   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.544952   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:37.544959   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:37.545028   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:37.581507   74485 cri.go:89] found id: ""
	I1105 19:14:37.581535   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.581545   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:37.581553   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:37.581612   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:37.615546   74485 cri.go:89] found id: ""
	I1105 19:14:37.615576   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.615585   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:37.615592   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:37.615667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:37.648239   74485 cri.go:89] found id: ""
	I1105 19:14:37.648267   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.648276   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:37.648283   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:37.648343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:33.460860   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:35.461416   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:36.224852   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:38.725488   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.347563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:39.347732   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.682861   74485 cri.go:89] found id: ""
	I1105 19:14:37.682891   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.682898   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:37.682904   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:37.682952   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:37.715506   74485 cri.go:89] found id: ""
	I1105 19:14:37.715532   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.715540   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:37.715547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:37.715597   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:37.747973   74485 cri.go:89] found id: ""
	I1105 19:14:37.748003   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.748014   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:37.748022   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:37.748083   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:37.780270   74485 cri.go:89] found id: ""
	I1105 19:14:37.780294   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.780302   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:37.780310   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:37.780321   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.793885   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:37.793914   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:37.860114   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:37.860140   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:37.860154   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:37.941221   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:37.941255   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.980537   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:37.980567   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.532301   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:40.545540   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:40.545599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:40.578642   74485 cri.go:89] found id: ""
	I1105 19:14:40.578687   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.578699   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:40.578707   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:40.578772   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:40.612049   74485 cri.go:89] found id: ""
	I1105 19:14:40.612078   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.612089   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:40.612097   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:40.612159   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:40.644495   74485 cri.go:89] found id: ""
	I1105 19:14:40.644519   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.644527   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:40.644532   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:40.644587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:40.676890   74485 cri.go:89] found id: ""
	I1105 19:14:40.676923   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.676931   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:40.676937   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:40.676984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:40.710095   74485 cri.go:89] found id: ""
	I1105 19:14:40.710125   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.710136   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:40.710144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:40.710200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:40.748323   74485 cri.go:89] found id: ""
	I1105 19:14:40.748353   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.748364   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:40.748372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:40.748501   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:40.781578   74485 cri.go:89] found id: ""
	I1105 19:14:40.781606   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.781618   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:40.781626   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:40.781689   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:40.816010   74485 cri.go:89] found id: ""
	I1105 19:14:40.816048   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.816060   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:40.816071   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:40.816086   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.869836   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:40.869876   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:40.883436   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:40.883471   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:40.946538   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:40.946566   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:40.946585   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:41.023085   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:41.023123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.962163   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.461278   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.726894   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.224939   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:41.847053   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:44.346789   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.566841   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:43.579425   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:43.579498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:43.620500   74485 cri.go:89] found id: ""
	I1105 19:14:43.620526   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.620535   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:43.620541   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:43.620600   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:43.652992   74485 cri.go:89] found id: ""
	I1105 19:14:43.653024   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.653035   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:43.653042   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:43.653105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:43.686945   74485 cri.go:89] found id: ""
	I1105 19:14:43.686991   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.687003   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:43.687010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:43.687124   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:43.720075   74485 cri.go:89] found id: ""
	I1105 19:14:43.720103   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.720114   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:43.720121   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:43.720179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:43.757969   74485 cri.go:89] found id: ""
	I1105 19:14:43.757997   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.758005   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:43.758011   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:43.758071   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:43.790068   74485 cri.go:89] found id: ""
	I1105 19:14:43.790094   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.790103   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:43.790109   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:43.790153   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:43.821696   74485 cri.go:89] found id: ""
	I1105 19:14:43.821722   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.821733   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:43.821741   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:43.821803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:43.855976   74485 cri.go:89] found id: ""
	I1105 19:14:43.856003   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.856011   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:43.856019   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:43.856029   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:43.934375   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:43.934409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:43.972567   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:43.972597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:44.025660   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:44.025696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:44.039229   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:44.039258   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:44.112179   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:46.612815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:46.626070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:46.626145   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:46.659184   74485 cri.go:89] found id: ""
	I1105 19:14:46.659210   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.659218   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:46.659227   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:46.659288   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:46.691887   74485 cri.go:89] found id: ""
	I1105 19:14:46.691917   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.691928   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:46.691934   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:46.692003   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:46.725745   74485 cri.go:89] found id: ""
	I1105 19:14:46.725776   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.725787   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:46.725795   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:46.725847   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:46.761733   74485 cri.go:89] found id: ""
	I1105 19:14:46.761762   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.761773   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:46.761780   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:46.761842   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:46.792926   74485 cri.go:89] found id: ""
	I1105 19:14:46.792955   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.792966   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:46.792974   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:46.793036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:46.824462   74485 cri.go:89] found id: ""
	I1105 19:14:46.824503   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.824512   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:46.824519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:46.824580   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:46.865057   74485 cri.go:89] found id: ""
	I1105 19:14:46.865082   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.865090   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:46.865095   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:46.865146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:46.901357   74485 cri.go:89] found id: ""
	I1105 19:14:46.901385   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.901393   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:46.901401   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:46.901414   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:46.951986   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:46.952021   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:46.966035   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:46.966065   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:47.035163   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:47.035184   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:47.035196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:47.115825   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:47.115860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:42.961397   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.460846   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.724189   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.724319   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:46.847553   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.346787   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.658737   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:49.672088   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:49.672182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:49.708638   74485 cri.go:89] found id: ""
	I1105 19:14:49.708666   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.708674   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:49.708679   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:49.708736   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:49.744485   74485 cri.go:89] found id: ""
	I1105 19:14:49.744513   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.744521   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:49.744526   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:49.744572   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:49.779758   74485 cri.go:89] found id: ""
	I1105 19:14:49.779785   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.779794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:49.779800   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:49.779858   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:49.814216   74485 cri.go:89] found id: ""
	I1105 19:14:49.814248   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.814256   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:49.814262   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:49.814310   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:49.851348   74485 cri.go:89] found id: ""
	I1105 19:14:49.851377   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.851389   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:49.851396   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:49.851455   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:49.883866   74485 cri.go:89] found id: ""
	I1105 19:14:49.883897   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.883906   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:49.883912   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:49.883959   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:49.916944   74485 cri.go:89] found id: ""
	I1105 19:14:49.916967   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.916975   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:49.916980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:49.917039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:49.950405   74485 cri.go:89] found id: ""
	I1105 19:14:49.950437   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.950449   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:49.950459   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:49.950475   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:49.996064   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:49.996102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:50.044865   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:50.044902   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:50.058206   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:50.058236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:50.130371   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:50.130397   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:50.130412   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:49.960550   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.961271   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.724896   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.224128   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.346823   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:53.847102   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.706441   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:52.719571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:52.719655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:52.753850   74485 cri.go:89] found id: ""
	I1105 19:14:52.753880   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.753891   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:52.753899   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:52.753961   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:52.794112   74485 cri.go:89] found id: ""
	I1105 19:14:52.794139   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.794149   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:52.794156   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:52.794218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:52.830151   74485 cri.go:89] found id: ""
	I1105 19:14:52.830178   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.830188   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:52.830195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:52.830258   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:52.864803   74485 cri.go:89] found id: ""
	I1105 19:14:52.864832   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.864853   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:52.864868   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:52.864930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:52.897237   74485 cri.go:89] found id: ""
	I1105 19:14:52.897271   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.897282   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:52.897289   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:52.897351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:52.932236   74485 cri.go:89] found id: ""
	I1105 19:14:52.932262   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.932270   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:52.932275   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:52.932319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:52.965781   74485 cri.go:89] found id: ""
	I1105 19:14:52.965808   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.965817   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:52.965825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:52.965918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:52.999098   74485 cri.go:89] found id: ""
	I1105 19:14:52.999121   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.999129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:52.999137   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:52.999146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:53.051085   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:53.051127   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:53.064690   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:53.064717   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:53.128334   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:53.128358   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:53.128372   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:53.207751   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:53.207791   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:55.745430   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:55.758734   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:55.758821   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:55.791827   74485 cri.go:89] found id: ""
	I1105 19:14:55.791854   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.791862   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:55.791868   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:55.791922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:55.824191   74485 cri.go:89] found id: ""
	I1105 19:14:55.824217   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.824224   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:55.824230   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:55.824278   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:55.858579   74485 cri.go:89] found id: ""
	I1105 19:14:55.858611   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.858619   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:55.858625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:55.858673   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:55.891579   74485 cri.go:89] found id: ""
	I1105 19:14:55.891604   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.891612   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:55.891617   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:55.891663   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:55.924881   74485 cri.go:89] found id: ""
	I1105 19:14:55.924910   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.924920   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:55.924930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:55.924999   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:55.956634   74485 cri.go:89] found id: ""
	I1105 19:14:55.956663   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.956678   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:55.956686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:55.956742   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:55.988770   74485 cri.go:89] found id: ""
	I1105 19:14:55.988803   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.988814   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:55.988821   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:55.988880   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:56.022236   74485 cri.go:89] found id: ""
	I1105 19:14:56.022257   74485 logs.go:282] 0 containers: []
	W1105 19:14:56.022266   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:56.022273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:56.022284   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:56.073035   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:56.073071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:56.086899   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:56.086923   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:56.158219   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:56.158247   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:56.158259   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:56.246621   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:56.246660   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:53.962537   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.461516   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:54.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.725381   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:59.223995   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:55.847591   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.346027   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:00.349718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.791443   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:58.804398   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:58.804476   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:58.837812   74485 cri.go:89] found id: ""
	I1105 19:14:58.837840   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.837856   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:58.837863   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:58.837926   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:58.870154   74485 cri.go:89] found id: ""
	I1105 19:14:58.870186   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.870197   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:58.870204   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:58.870268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:58.906518   74485 cri.go:89] found id: ""
	I1105 19:14:58.906545   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.906553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:58.906563   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:58.906614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:58.939320   74485 cri.go:89] found id: ""
	I1105 19:14:58.939346   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.939357   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:58.939364   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:58.939426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:58.974116   74485 cri.go:89] found id: ""
	I1105 19:14:58.974143   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.974153   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:58.974160   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:58.974221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:59.006820   74485 cri.go:89] found id: ""
	I1105 19:14:59.006854   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.006866   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:59.006873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:59.006933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:59.039691   74485 cri.go:89] found id: ""
	I1105 19:14:59.039723   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.039735   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:59.039742   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:59.039800   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:59.071829   74485 cri.go:89] found id: ""
	I1105 19:14:59.071860   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.071881   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:59.071893   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:59.071906   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:59.124158   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:59.124195   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:59.138563   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:59.138594   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:59.216148   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:59.216174   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:59.216189   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:59.295262   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:59.295297   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:01.833789   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:01.847332   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:01.847408   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:01.882721   74485 cri.go:89] found id: ""
	I1105 19:15:01.882743   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.882750   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:01.882755   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:01.882811   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:01.916457   74485 cri.go:89] found id: ""
	I1105 19:15:01.916479   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.916487   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:01.916502   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:01.916557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:01.950521   74485 cri.go:89] found id: ""
	I1105 19:15:01.950552   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.950564   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:01.950571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:01.950624   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:01.985823   74485 cri.go:89] found id: ""
	I1105 19:15:01.985852   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.985862   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:01.985870   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:01.985918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:02.021689   74485 cri.go:89] found id: ""
	I1105 19:15:02.021720   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.021731   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:02.021739   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:02.021804   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:02.058632   74485 cri.go:89] found id: ""
	I1105 19:15:02.058658   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.058666   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:02.058672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:02.058738   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:02.097916   74485 cri.go:89] found id: ""
	I1105 19:15:02.097947   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.097956   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:02.097961   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:02.098010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:02.131992   74485 cri.go:89] found id: ""
	I1105 19:15:02.132027   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.132038   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:02.132050   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:02.132066   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:02.188605   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:02.188645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:02.201873   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:02.201904   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:02.274767   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:02.274795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:02.274811   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:02.358520   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:02.358559   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:58.962072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.461009   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.224719   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:03.724333   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:02.847593   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.348665   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:04.897693   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:04.913131   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:04.913207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:04.952546   74485 cri.go:89] found id: ""
	I1105 19:15:04.952571   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.952579   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:04.952584   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:04.952643   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:04.987334   74485 cri.go:89] found id: ""
	I1105 19:15:04.987360   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.987368   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:04.987374   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:04.987434   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:05.021873   74485 cri.go:89] found id: ""
	I1105 19:15:05.021906   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.021919   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:05.021926   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:05.021985   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:05.056169   74485 cri.go:89] found id: ""
	I1105 19:15:05.056199   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.056208   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:05.056213   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:05.056265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:05.093090   74485 cri.go:89] found id: ""
	I1105 19:15:05.093117   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.093125   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:05.093130   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:05.093182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:05.127533   74485 cri.go:89] found id: ""
	I1105 19:15:05.127557   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.127564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:05.127576   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:05.127625   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:05.165127   74485 cri.go:89] found id: ""
	I1105 19:15:05.165162   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.165173   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:05.165180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:05.165243   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:05.200526   74485 cri.go:89] found id: ""
	I1105 19:15:05.200556   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.200567   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:05.200578   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:05.200593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:05.247497   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:05.247535   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:05.261963   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:05.261996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:05.336813   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:05.336833   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:05.336844   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:05.412278   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:05.412320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:03.461266   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.463142   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.728530   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:08.227700   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.848748   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:10.346754   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.951085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:07.966125   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:07.966203   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:08.004253   74485 cri.go:89] found id: ""
	I1105 19:15:08.004291   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.004302   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:08.004310   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:08.004373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:08.039539   74485 cri.go:89] found id: ""
	I1105 19:15:08.039562   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.039569   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:08.039575   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:08.039629   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:08.076043   74485 cri.go:89] found id: ""
	I1105 19:15:08.076080   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.076093   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:08.076101   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:08.076157   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:08.110489   74485 cri.go:89] found id: ""
	I1105 19:15:08.110512   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.110519   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:08.110525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:08.110589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:08.147532   74485 cri.go:89] found id: ""
	I1105 19:15:08.147564   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.147574   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:08.147580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:08.147628   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:08.182225   74485 cri.go:89] found id: ""
	I1105 19:15:08.182248   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.182256   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:08.182263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:08.182322   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:08.223488   74485 cri.go:89] found id: ""
	I1105 19:15:08.223524   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.223536   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:08.223544   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:08.223610   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:08.266524   74485 cri.go:89] found id: ""
	I1105 19:15:08.266559   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.266571   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:08.266582   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:08.266597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:08.279036   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:08.279061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:08.346030   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:08.346052   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:08.346064   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:08.428081   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:08.428118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:08.464760   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:08.464789   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.016193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:11.030598   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:11.030681   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:11.066035   74485 cri.go:89] found id: ""
	I1105 19:15:11.066064   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.066073   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:11.066078   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:11.066133   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:11.103906   74485 cri.go:89] found id: ""
	I1105 19:15:11.103937   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.103948   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:11.103955   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:11.104023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:11.142936   74485 cri.go:89] found id: ""
	I1105 19:15:11.143024   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.143034   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:11.143041   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:11.143091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:11.180041   74485 cri.go:89] found id: ""
	I1105 19:15:11.180074   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.180086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:11.180094   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:11.180158   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:11.215661   74485 cri.go:89] found id: ""
	I1105 19:15:11.215693   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.215701   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:11.215707   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:11.215758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:11.252603   74485 cri.go:89] found id: ""
	I1105 19:15:11.252651   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.252663   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:11.252672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:11.252739   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:11.299295   74485 cri.go:89] found id: ""
	I1105 19:15:11.299328   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.299340   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:11.299347   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:11.299402   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:11.355153   74485 cri.go:89] found id: ""
	I1105 19:15:11.355177   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.355185   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:11.355193   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:11.355206   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:11.441076   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:11.441118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:11.480367   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:11.480396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.534646   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:11.534683   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:11.548141   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:11.548170   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:11.616452   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:07.961073   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:09.962118   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.455874   73732 pod_ready.go:82] duration metric: took 4m0.000853559s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:12.455911   73732 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:15:12.455936   73732 pod_ready.go:39] duration metric: took 4m14.55377544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:12.455984   73732 kubeadm.go:597] duration metric: took 4m23.030552871s to restartPrimaryControlPlane
	W1105 19:15:12.456078   73732 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:12.456111   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:10.724247   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.725886   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.846646   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.848074   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.117448   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:14.131224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:14.131297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:14.167811   74485 cri.go:89] found id: ""
	I1105 19:15:14.167843   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.167855   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:14.167862   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:14.167921   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:14.204128   74485 cri.go:89] found id: ""
	I1105 19:15:14.204156   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.204164   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:14.204169   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:14.204232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:14.240687   74485 cri.go:89] found id: ""
	I1105 19:15:14.240716   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.240727   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:14.240735   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:14.240788   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:14.274204   74485 cri.go:89] found id: ""
	I1105 19:15:14.274231   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.274242   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:14.274249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:14.274307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:14.312090   74485 cri.go:89] found id: ""
	I1105 19:15:14.312119   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.312130   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:14.312139   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:14.312200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:14.346824   74485 cri.go:89] found id: ""
	I1105 19:15:14.346857   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.346868   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:14.346875   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:14.346934   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:14.380634   74485 cri.go:89] found id: ""
	I1105 19:15:14.380668   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.380679   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:14.380686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:14.380746   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:14.414402   74485 cri.go:89] found id: ""
	I1105 19:15:14.414432   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.414441   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:14.414449   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:14.414459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:14.464542   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:14.464581   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:14.478195   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:14.478225   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:14.553670   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:14.553693   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:14.553708   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:14.634619   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:14.634659   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.174085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:17.191712   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:17.191771   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:17.234101   74485 cri.go:89] found id: ""
	I1105 19:15:17.234132   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.234143   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:17.234149   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:17.234213   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:17.281548   74485 cri.go:89] found id: ""
	I1105 19:15:17.281574   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.281581   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:17.281588   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:17.281655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:17.337698   74485 cri.go:89] found id: ""
	I1105 19:15:17.337727   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.337735   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:17.337743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:17.337790   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:17.371756   74485 cri.go:89] found id: ""
	I1105 19:15:17.371782   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.371790   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:17.371796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:17.371854   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:17.404989   74485 cri.go:89] found id: ""
	I1105 19:15:17.405015   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.405026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:17.405033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:17.405096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:17.438613   74485 cri.go:89] found id: ""
	I1105 19:15:17.438637   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.438648   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:17.438656   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:17.438717   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:17.470465   74485 cri.go:89] found id: ""
	I1105 19:15:17.470494   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.470502   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:17.470508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:17.470558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:17.503835   74485 cri.go:89] found id: ""
	I1105 19:15:17.503867   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.503876   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:17.503884   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:17.503896   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:17.584110   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:17.584146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.626928   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:17.626955   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:15.223749   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.225434   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.347847   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:19.847047   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.679356   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:17.679397   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:17.693476   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:17.693506   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:17.766809   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.266926   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:20.282219   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:20.282293   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:20.322133   74485 cri.go:89] found id: ""
	I1105 19:15:20.322163   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.322171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:20.322178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:20.322248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:20.357030   74485 cri.go:89] found id: ""
	I1105 19:15:20.357072   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.357084   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:20.357091   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:20.357156   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:20.390523   74485 cri.go:89] found id: ""
	I1105 19:15:20.390549   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.390559   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:20.390567   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:20.390631   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:20.425807   74485 cri.go:89] found id: ""
	I1105 19:15:20.425830   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.425837   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:20.425843   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:20.425903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:20.461984   74485 cri.go:89] found id: ""
	I1105 19:15:20.462014   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.462026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:20.462033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:20.462094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:20.495689   74485 cri.go:89] found id: ""
	I1105 19:15:20.495725   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.495739   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:20.495746   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:20.495799   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:20.528666   74485 cri.go:89] found id: ""
	I1105 19:15:20.528701   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.528713   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:20.528721   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:20.528783   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:20.562566   74485 cri.go:89] found id: ""
	I1105 19:15:20.562596   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.562606   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:20.562614   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:20.562624   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:20.610961   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:20.611000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:20.623898   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:20.623928   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:20.696412   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.696440   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:20.696456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:20.779601   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:20.779642   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:19.725198   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.224019   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.225286   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.347992   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.846718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:23.319846   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:23.333278   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:23.333357   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:23.370771   74485 cri.go:89] found id: ""
	I1105 19:15:23.370796   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.370805   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:23.370810   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:23.370872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:23.405994   74485 cri.go:89] found id: ""
	I1105 19:15:23.406021   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.406029   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:23.406034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:23.406092   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:23.443729   74485 cri.go:89] found id: ""
	I1105 19:15:23.443757   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.443767   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:23.443774   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:23.443836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:23.476162   74485 cri.go:89] found id: ""
	I1105 19:15:23.476188   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.476197   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:23.476205   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:23.476266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:23.509325   74485 cri.go:89] found id: ""
	I1105 19:15:23.509353   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.509363   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:23.509371   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:23.509427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:23.541880   74485 cri.go:89] found id: ""
	I1105 19:15:23.541912   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.541922   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:23.541929   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:23.541993   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:23.574204   74485 cri.go:89] found id: ""
	I1105 19:15:23.574236   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.574248   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:23.574256   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:23.574323   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:23.606865   74485 cri.go:89] found id: ""
	I1105 19:15:23.606896   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.606908   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:23.606918   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:23.606932   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:23.673771   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:23.673792   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:23.673803   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:23.753298   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:23.753335   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:23.792273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:23.792304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:23.843072   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:23.843110   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.356859   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:26.369417   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:26.369488   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:26.403611   74485 cri.go:89] found id: ""
	I1105 19:15:26.403639   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.403647   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:26.403653   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:26.403725   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:26.439891   74485 cri.go:89] found id: ""
	I1105 19:15:26.439924   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.439936   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:26.439943   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:26.439991   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:26.473502   74485 cri.go:89] found id: ""
	I1105 19:15:26.473542   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.473554   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:26.473561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:26.473640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:26.505666   74485 cri.go:89] found id: ""
	I1105 19:15:26.505695   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.505703   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:26.505710   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:26.505769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:26.539781   74485 cri.go:89] found id: ""
	I1105 19:15:26.539815   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.539827   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:26.539835   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:26.539911   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:26.574673   74485 cri.go:89] found id: ""
	I1105 19:15:26.574712   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.574721   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:26.574727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:26.574773   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:26.608410   74485 cri.go:89] found id: ""
	I1105 19:15:26.608433   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.608441   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:26.608446   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:26.608494   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:26.644036   74485 cri.go:89] found id: ""
	I1105 19:15:26.644065   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.644076   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:26.644087   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:26.644098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.718901   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:26.718937   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:26.758920   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:26.758953   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:26.811241   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:26.811277   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.824931   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:26.824961   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:26.891799   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:26.725062   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:27.724594   74141 pod_ready.go:82] duration metric: took 4m0.006622979s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:27.724627   74141 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1105 19:15:27.724644   74141 pod_ready.go:39] duration metric: took 4m0.807889519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:27.724663   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:15:27.724711   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:27.724769   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:27.771870   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:27.771897   74141 cri.go:89] found id: ""
	I1105 19:15:27.771906   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:27.771966   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.776484   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:27.776553   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:27.823529   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:27.823564   74141 cri.go:89] found id: ""
	I1105 19:15:27.823576   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:27.823638   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.828600   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:27.828685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:27.878206   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:27.878242   74141 cri.go:89] found id: ""
	I1105 19:15:27.878254   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:27.878317   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.882545   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:27.882640   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:27.920102   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:27.920127   74141 cri.go:89] found id: ""
	I1105 19:15:27.920137   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:27.920189   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.924516   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:27.924593   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:27.969493   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:27.969523   74141 cri.go:89] found id: ""
	I1105 19:15:27.969534   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:27.969589   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.973637   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:27.973724   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:28.014369   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.014396   74141 cri.go:89] found id: ""
	I1105 19:15:28.014405   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:28.014463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.019040   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:28.019112   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:28.056411   74141 cri.go:89] found id: ""
	I1105 19:15:28.056438   74141 logs.go:282] 0 containers: []
	W1105 19:15:28.056446   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:28.056452   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:28.056502   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:28.099541   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.099562   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.099566   74141 cri.go:89] found id: ""
	I1105 19:15:28.099573   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:28.099628   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.104144   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.108443   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:28.108465   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.153262   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:28.153302   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.197210   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:28.197237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:28.242915   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:28.242944   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:28.257468   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:28.257497   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:28.299795   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:28.299825   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:28.333983   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:28.334015   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:28.369174   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:28.369202   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:28.405838   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:28.405869   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:28.477842   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:28.477880   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:28.595832   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:28.595865   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:28.639146   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:28.639179   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.689519   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:28.689554   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.846977   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:28.847878   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:29.392417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:29.405249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:29.405331   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:29.437397   74485 cri.go:89] found id: ""
	I1105 19:15:29.437432   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.437443   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:29.437450   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:29.437504   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:29.469908   74485 cri.go:89] found id: ""
	I1105 19:15:29.469938   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.469946   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:29.469951   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:29.470008   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:29.502302   74485 cri.go:89] found id: ""
	I1105 19:15:29.502331   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.502339   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:29.502345   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:29.502391   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:29.534285   74485 cri.go:89] found id: ""
	I1105 19:15:29.534309   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.534317   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:29.534322   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:29.534373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:29.571918   74485 cri.go:89] found id: ""
	I1105 19:15:29.571962   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.571973   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:29.571983   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:29.572042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:29.605324   74485 cri.go:89] found id: ""
	I1105 19:15:29.605354   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.605365   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:29.605373   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:29.605435   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:29.640181   74485 cri.go:89] found id: ""
	I1105 19:15:29.640210   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.640218   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:29.640224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:29.640273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:29.671121   74485 cri.go:89] found id: ""
	I1105 19:15:29.671147   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.671155   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:29.671164   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:29.671174   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:29.750821   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:29.750856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:29.787452   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:29.787479   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:29.840413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:29.840459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:29.855540   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:29.855580   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:29.925849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:32.426016   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:32.438759   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:32.438824   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:32.476376   74485 cri.go:89] found id: ""
	I1105 19:15:32.476406   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.476416   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:32.476423   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:32.476490   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:32.512328   74485 cri.go:89] found id: ""
	I1105 19:15:32.512352   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.512360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:32.512365   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:32.512414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:32.546803   74485 cri.go:89] found id: ""
	I1105 19:15:32.546833   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.546844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:32.546851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:32.546925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:32.585904   74485 cri.go:89] found id: ""
	I1105 19:15:32.585934   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.585946   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:32.585953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:32.586014   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:32.620976   74485 cri.go:89] found id: ""
	I1105 19:15:32.621005   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.621012   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:32.621018   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:32.621082   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.668028   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:31.684024   74141 api_server.go:72] duration metric: took 4m12.496021782s to wait for apiserver process to appear ...
	I1105 19:15:31.684060   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:15:31.684105   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:31.684163   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:31.719462   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:31.719496   74141 cri.go:89] found id: ""
	I1105 19:15:31.719506   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:31.719559   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.723632   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:31.723702   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:31.761976   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:31.762001   74141 cri.go:89] found id: ""
	I1105 19:15:31.762010   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:31.762067   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.766066   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:31.766137   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:31.799673   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:31.799694   74141 cri.go:89] found id: ""
	I1105 19:15:31.799701   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:31.799753   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.803632   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:31.803714   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:31.841782   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:31.841808   74141 cri.go:89] found id: ""
	I1105 19:15:31.841818   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:31.841873   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.850409   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:31.850471   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:31.891932   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:31.891959   74141 cri.go:89] found id: ""
	I1105 19:15:31.891969   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:31.892026   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.896065   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:31.896125   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.932759   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:31.932781   74141 cri.go:89] found id: ""
	I1105 19:15:31.932788   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:31.932831   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.936611   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:31.936685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:31.971296   74141 cri.go:89] found id: ""
	I1105 19:15:31.971328   74141 logs.go:282] 0 containers: []
	W1105 19:15:31.971339   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:31.971348   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:31.971410   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:32.006153   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:32.006173   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.006177   74141 cri.go:89] found id: ""
	I1105 19:15:32.006184   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:32.006226   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.010159   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.013807   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.013831   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.084222   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:32.084273   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:32.127875   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:32.127928   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:32.173008   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:32.173041   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:32.235366   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.235402   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.714822   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:32.714861   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.750733   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.750761   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.796233   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.796264   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.809269   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.809296   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:32.931162   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:32.931196   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:32.968551   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:32.968578   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:33.008115   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:33.008152   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:33.046201   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:33.046237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:31.346652   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:33.347118   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:32.658958   74485 cri.go:89] found id: ""
	I1105 19:15:32.659006   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.659018   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:32.659026   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:32.659091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:32.694317   74485 cri.go:89] found id: ""
	I1105 19:15:32.694341   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.694349   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:32.694354   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:32.694403   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:32.728277   74485 cri.go:89] found id: ""
	I1105 19:15:32.728314   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.728327   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:32.728338   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.728352   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.815579   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.815615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.856776   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.856807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.909477   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.909518   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.923789   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.923817   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:32.997898   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
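	(The "connection refused" above simply means nothing is serving the apiserver port on localhost:8443 yet while this control plane is being reset. A quick manual probe of the same endpoint, sketched here as an assumption about how one would check on the node rather than anything the harness runs, would be:
	# check whether anything is listening on the apiserver port
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# probe the healthz endpoint the tooling polls; -k skips TLS verification for the self-signed CA
	curl -sk https://localhost:8443/healthz
	)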
	I1105 19:15:35.498040   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:35.511537   74485 kubeadm.go:597] duration metric: took 4m4.46832509s to restartPrimaryControlPlane
	W1105 19:15:35.511612   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:35.511644   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:35.586678   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:15:35.591512   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:15:35.592489   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:15:35.592507   74141 api_server.go:131] duration metric: took 3.908440367s to wait for apiserver health ...
	I1105 19:15:35.592514   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:15:35.592538   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:35.592589   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:35.636389   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.636408   74141 cri.go:89] found id: ""
	I1105 19:15:35.636416   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:35.636463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.640778   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:35.640839   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:35.676793   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:35.676818   74141 cri.go:89] found id: ""
	I1105 19:15:35.676828   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:35.676890   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.681596   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:35.681669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:35.721728   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:35.721754   74141 cri.go:89] found id: ""
	I1105 19:15:35.721763   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:35.721808   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.725619   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:35.725677   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:35.765348   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:35.765377   74141 cri.go:89] found id: ""
	I1105 19:15:35.765386   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:35.765439   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.769594   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:35.769669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:35.809427   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:35.809452   74141 cri.go:89] found id: ""
	I1105 19:15:35.809460   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:35.809505   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.814317   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:35.814376   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:35.853861   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:35.853882   74141 cri.go:89] found id: ""
	I1105 19:15:35.853890   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:35.853934   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.857734   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:35.857787   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:35.897791   74141 cri.go:89] found id: ""
	I1105 19:15:35.897816   74141 logs.go:282] 0 containers: []
	W1105 19:15:35.897824   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:35.897830   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:35.897887   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:35.940906   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:35.940940   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:35.940946   74141 cri.go:89] found id: ""
	I1105 19:15:35.940954   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:35.941006   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.945200   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.948860   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:35.948884   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.992660   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:35.992690   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:36.033586   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:36.033617   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:36.066599   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:36.066643   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:36.104895   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:36.104932   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:36.489747   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:36.489781   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:36.531923   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:36.531952   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:36.598718   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:36.598758   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:36.612969   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:36.612998   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:36.718535   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:36.718568   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:36.755636   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:36.755677   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:36.815561   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:36.815640   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:36.850878   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:36.850904   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:39.390699   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:15:39.390733   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.390738   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.390743   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.390747   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.390750   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.390753   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.390760   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.390764   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.390771   74141 system_pods.go:74] duration metric: took 3.798251189s to wait for pod list to return data ...
	I1105 19:15:39.390777   74141 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:15:39.393894   74141 default_sa.go:45] found service account: "default"
	I1105 19:15:39.393914   74141 default_sa.go:55] duration metric: took 3.132788ms for default service account to be created ...
	I1105 19:15:39.393929   74141 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:15:39.398455   74141 system_pods.go:86] 8 kube-system pods found
	I1105 19:15:39.398480   74141 system_pods.go:89] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.398485   74141 system_pods.go:89] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.398490   74141 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.398494   74141 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.398497   74141 system_pods.go:89] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.398501   74141 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.398508   74141 system_pods.go:89] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.398512   74141 system_pods.go:89] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.398520   74141 system_pods.go:126] duration metric: took 4.586494ms to wait for k8s-apps to be running ...
	I1105 19:15:39.398529   74141 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:15:39.398569   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.413878   74141 system_svc.go:56] duration metric: took 15.340417ms WaitForService to wait for kubelet
	I1105 19:15:39.413908   74141 kubeadm.go:582] duration metric: took 4m20.225910976s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:15:39.413936   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:15:39.416851   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:15:39.416870   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:15:39.416880   74141 node_conditions.go:105] duration metric: took 2.939584ms to run NodePressure ...
	I1105 19:15:39.416891   74141 start.go:241] waiting for startup goroutines ...
	I1105 19:15:39.416899   74141 start.go:246] waiting for cluster config update ...
	I1105 19:15:39.416911   74141 start.go:255] writing updated cluster config ...
	I1105 19:15:39.417211   74141 ssh_runner.go:195] Run: rm -f paused
	I1105 19:15:39.463773   74141 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:15:39.465688   74141 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-608095" cluster and "default" namespace by default
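	(At this point the default-k8s-diff-port-608095 profile is up and its context is the kubectl default. A short sanity check against it, assuming the kubeconfig written above, could be:
	kubectl --context default-k8s-diff-port-608095 get nodes
	kubectl --context default-k8s-diff-port-608095 -n kube-system get pods
	)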
	I1105 19:15:39.702249   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.19058336s)
	I1105 19:15:39.702314   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.717966   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:39.728114   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:39.740451   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:39.740476   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:39.740519   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:39.751089   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:39.751150   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:39.761832   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:39.771841   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:39.771904   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:39.782332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.792379   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:39.792438   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.801625   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:39.811691   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:39.811740   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:39.821162   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:39.891377   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:15:39.891443   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:40.034176   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:40.034337   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:40.034476   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:15:40.211588   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:35.847491   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:38.346965   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.348252   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.213724   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:40.213838   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:40.213939   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:40.214048   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:40.214172   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:40.214266   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:40.214375   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:40.214478   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:40.214567   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:40.214687   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:40.214819   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:40.214884   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:40.214980   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:40.358606   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:40.632263   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:40.766570   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:40.885914   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:40.902379   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:40.903647   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:40.903716   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:41.040274   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:41.042093   74485 out.go:235]   - Booting up control plane ...
	I1105 19:15:41.042222   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:41.048448   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:41.058445   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:41.059466   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:41.062648   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:15:38.649673   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193536212s)
	I1105 19:15:38.649753   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:38.665214   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:38.674520   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:38.684078   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:38.684102   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:38.684151   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:38.693169   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:38.693239   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:38.702305   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:38.710796   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:38.710868   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:38.719716   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.728090   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:38.728143   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.737219   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:38.745625   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:38.745692   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:38.754684   73732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:38.914343   73732 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:15:42.847011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:44.851431   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:47.368221   73732 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:15:47.368296   73732 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:47.368405   73732 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:47.368552   73732 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:47.368686   73732 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:15:47.368787   73732 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:47.370333   73732 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:47.370429   73732 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:47.370529   73732 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:47.370650   73732 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:47.370763   73732 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:47.370900   73732 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:47.371009   73732 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:47.371110   73732 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:47.371198   73732 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:47.371312   73732 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:47.371431   73732 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:47.371494   73732 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:47.371573   73732 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:47.371656   73732 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:47.371725   73732 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:15:47.371797   73732 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:47.371893   73732 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:47.371976   73732 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:47.372074   73732 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:47.372160   73732 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:47.374386   73732 out.go:235]   - Booting up control plane ...
	I1105 19:15:47.374503   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:47.374622   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:47.374707   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:47.374838   73732 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:47.374950   73732 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:47.375046   73732 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:47.375226   73732 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:15:47.375367   73732 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:15:47.375450   73732 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.124171ms
	I1105 19:15:47.375549   73732 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:15:47.375647   73732 kubeadm.go:310] [api-check] The API server is healthy after 5.001431223s
	I1105 19:15:47.375804   73732 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:15:47.375968   73732 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:15:47.376055   73732 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:15:47.376321   73732 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-271881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:15:47.376412   73732 kubeadm.go:310] [bootstrap-token] Using token: 2xak8n.owgv6oncwawjarav
	I1105 19:15:47.377766   73732 out.go:235]   - Configuring RBAC rules ...
	I1105 19:15:47.377911   73732 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:15:47.378024   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:15:47.378138   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:15:47.378243   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:15:47.378337   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:15:47.378408   73732 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:15:47.378502   73732 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:15:47.378541   73732 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:15:47.378580   73732 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:15:47.378587   73732 kubeadm.go:310] 
	I1105 19:15:47.378635   73732 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:15:47.378645   73732 kubeadm.go:310] 
	I1105 19:15:47.378711   73732 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:15:47.378718   73732 kubeadm.go:310] 
	I1105 19:15:47.378760   73732 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:15:47.378813   73732 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:15:47.378856   73732 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:15:47.378860   73732 kubeadm.go:310] 
	I1105 19:15:47.378910   73732 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:15:47.378913   73732 kubeadm.go:310] 
	I1105 19:15:47.378955   73732 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:15:47.378959   73732 kubeadm.go:310] 
	I1105 19:15:47.379030   73732 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:15:47.379114   73732 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:15:47.379195   73732 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:15:47.379203   73732 kubeadm.go:310] 
	I1105 19:15:47.379320   73732 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:15:47.379427   73732 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:15:47.379442   73732 kubeadm.go:310] 
	I1105 19:15:47.379559   73732 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.379718   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:15:47.379762   73732 kubeadm.go:310] 	--control-plane 
	I1105 19:15:47.379770   73732 kubeadm.go:310] 
	I1105 19:15:47.379844   73732 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:15:47.379851   73732 kubeadm.go:310] 
	I1105 19:15:47.379977   73732 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.380150   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:15:47.380167   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:15:47.380174   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:15:47.381714   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:15:47.382944   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:15:47.394080   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
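	(The 1-k8s.conflist copied here configures the bridge CNI that the log recommends for the kvm2 + crio combination. Its exact 496-byte contents are not shown; the sketch below writes an illustrative bridge conflist of the same general shape, where every field value is an assumption rather than the file minikube actually generated:
	# illustrative only; values are assumptions, not the 496-byte file scp'd above
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } }
	  ]
	}
	EOF
	)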
	I1105 19:15:47.411715   73732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:15:47.411773   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.411821   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-271881 minikube.k8s.io/updated_at=2024_11_05T19_15_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=embed-certs-271881 minikube.k8s.io/primary=true
	I1105 19:15:47.439084   73732 ops.go:34] apiserver oom_adj: -16
	I1105 19:15:47.601691   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.348094   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:49.847296   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:48.102103   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:48.602767   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.101780   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.601826   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.101976   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.602763   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.102779   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.601930   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.102574   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.241636   73732 kubeadm.go:1113] duration metric: took 4.829922813s to wait for elevateKubeSystemPrivileges
	I1105 19:15:52.241680   73732 kubeadm.go:394] duration metric: took 5m2.866246993s to StartCluster
	I1105 19:15:52.241704   73732 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.241801   73732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:15:52.244409   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.244716   73732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:15:52.244789   73732 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:15:52.244893   73732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-271881"
	I1105 19:15:52.244914   73732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-271881"
	I1105 19:15:52.244911   73732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-271881"
	I1105 19:15:52.244933   73732 addons.go:69] Setting metrics-server=true in profile "embed-certs-271881"
	I1105 19:15:52.244941   73732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-271881"
	I1105 19:15:52.244954   73732 addons.go:234] Setting addon metrics-server=true in "embed-certs-271881"
	W1105 19:15:52.244965   73732 addons.go:243] addon metrics-server should already be in state true
	I1105 19:15:52.244998   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1105 19:15:52.244925   73732 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:15:52.245001   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245065   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245404   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245422   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245436   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245455   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245464   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245543   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.246341   73732 out.go:177] * Verifying Kubernetes components...
	I1105 19:15:52.247801   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:15:52.261802   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I1105 19:15:52.262325   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.262955   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.263159   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.263591   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.264367   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.264413   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.265696   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I1105 19:15:52.265941   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I1105 19:15:52.266161   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266322   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266776   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266782   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266800   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.266803   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.267185   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267224   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267353   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.267804   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.267846   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.271094   73732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-271881"
	W1105 19:15:52.271117   73732 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:15:52.271147   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.271509   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.271554   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.284180   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I1105 19:15:52.284456   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1105 19:15:52.284703   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.284925   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.285248   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285261   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285355   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285363   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285578   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285727   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285766   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.285862   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.287834   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.288259   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.290341   73732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:15:52.290346   73732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:15:52.290695   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I1105 19:15:52.291040   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.291464   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.291479   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.291776   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.291974   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:15:52.291994   73732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:15:52.292015   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292054   73732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.292067   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:15:52.292079   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292355   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.292400   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.295296   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295650   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.295675   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295701   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295797   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.295969   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296102   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296247   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.296272   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.296305   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.296582   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.296714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296848   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296947   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.314049   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I1105 19:15:52.314561   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.315148   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.315168   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.315884   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.316080   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.318146   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.318465   73732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.318478   73732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:15:52.318496   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.321312   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321825   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.321850   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321885   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.322095   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.322238   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.322397   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.453762   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:15:52.483722   73732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493492   73732 node_ready.go:49] node "embed-certs-271881" has status "Ready":"True"
	I1105 19:15:52.493519   73732 node_ready.go:38] duration metric: took 9.757528ms for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493530   73732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:52.508208   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:15:52.577925   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.589366   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:15:52.589389   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:15:52.612570   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:15:52.612593   73732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:15:52.645851   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.647686   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:52.647713   73732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:15:52.668865   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:53.246894   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246918   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.246923   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246950   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247230   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247277   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247305   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247323   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247338   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247349   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247331   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247368   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247378   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247710   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247739   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247746   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247779   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247800   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247811   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.269143   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.269165   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.269465   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.269479   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.269483   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.494717   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.494741   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495080   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495100   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495114   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.495123   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495348   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.495394   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495414   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495427   73732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-271881"
	I1105 19:15:53.497126   73732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:15:52.347616   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:54.352434   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:53.498891   73732 addons.go:510] duration metric: took 1.254108253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
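Once the log above reports the three addons as enabled, the result can be spot-checked from the host with standard kubectl commands. The lines below are an illustrative sketch, not part of the test run: the context name assumes minikube's usual convention of naming the kubeconfig context after the profile, and "metrics-server" / "v1beta1.metrics.k8s.io" are the upstream metrics-server defaults rather than values read from this log.

    kubectl --context embed-certs-271881 get storageclass
    kubectl --context embed-certs-271881 -n kube-system rollout status deployment/metrics-server
    kubectl --context embed-certs-271881 get apiservice v1beta1.metrics.k8s.io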
	I1105 19:15:54.518219   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:57.015647   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:56.846198   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:58.847684   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:59.514759   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:01.514818   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:02.515124   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.515148   73732 pod_ready.go:82] duration metric: took 10.006914802s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.515158   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519864   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.519889   73732 pod_ready.go:82] duration metric: took 4.723101ms for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519900   73732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524948   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.524970   73732 pod_ready.go:82] duration metric: took 5.063029ms for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524979   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529710   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.529739   73732 pod_ready.go:82] duration metric: took 4.753888ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529750   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534282   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.534301   73732 pod_ready.go:82] duration metric: took 4.544677ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534309   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912364   73732 pod_ready.go:93] pod "kube-proxy-nfxcj" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.912387   73732 pod_ready.go:82] duration metric: took 378.071939ms for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912397   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311793   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:03.311816   73732 pod_ready.go:82] duration metric: took 399.412502ms for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311822   73732 pod_ready.go:39] duration metric: took 10.818282425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:03.311836   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:16:03.311883   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:16:03.327913   73732 api_server.go:72] duration metric: took 11.083157176s to wait for apiserver process to appear ...
	I1105 19:16:03.327947   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:16:03.327968   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:16:03.334499   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:16:03.335530   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:16:03.335550   73732 api_server.go:131] duration metric: took 7.596072ms to wait for apiserver health ...
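The healthz wait above boils down to an HTTPS GET against the apiserver endpoint logged at api_server.go:253. Roughly the same probe can be reproduced by hand; this is a sketch using the IP and port from this log, with -k because the quick check skips certificate verification:

    curl -k https://192.168.39.58:8443/healthz
    # a healthy apiserver answers 200 with the body: ok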
	I1105 19:16:03.335558   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:16:03.514782   73732 system_pods.go:59] 9 kube-system pods found
	I1105 19:16:03.514813   73732 system_pods.go:61] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.514820   73732 system_pods.go:61] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.514825   73732 system_pods.go:61] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.514830   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.514835   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.514840   73732 system_pods.go:61] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.514844   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.514854   73732 system_pods.go:61] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.514859   73732 system_pods.go:61] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.514868   73732 system_pods.go:74] duration metric: took 179.304519ms to wait for pod list to return data ...
	I1105 19:16:03.514877   73732 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:16:03.712690   73732 default_sa.go:45] found service account: "default"
	I1105 19:16:03.712719   73732 default_sa.go:55] duration metric: took 197.831177ms for default service account to be created ...
	I1105 19:16:03.712731   73732 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:16:03.916858   73732 system_pods.go:86] 9 kube-system pods found
	I1105 19:16:03.916893   73732 system_pods.go:89] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.916902   73732 system_pods.go:89] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.916908   73732 system_pods.go:89] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.916913   73732 system_pods.go:89] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.916918   73732 system_pods.go:89] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.916921   73732 system_pods.go:89] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.916924   73732 system_pods.go:89] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.916934   73732 system_pods.go:89] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.916941   73732 system_pods.go:89] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.916953   73732 system_pods.go:126] duration metric: took 204.215711ms to wait for k8s-apps to be running ...
	I1105 19:16:03.916963   73732 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:16:03.917019   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:03.931369   73732 system_svc.go:56] duration metric: took 14.397556ms WaitForService to wait for kubelet
	I1105 19:16:03.931407   73732 kubeadm.go:582] duration metric: took 11.686653516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:16:03.931454   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:16:04.111904   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:16:04.111928   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:16:04.111937   73732 node_conditions.go:105] duration metric: took 180.475073ms to run NodePressure ...
	I1105 19:16:04.111947   73732 start.go:241] waiting for startup goroutines ...
	I1105 19:16:04.111953   73732 start.go:246] waiting for cluster config update ...
	I1105 19:16:04.111962   73732 start.go:255] writing updated cluster config ...
	I1105 19:16:04.112197   73732 ssh_runner.go:195] Run: rm -f paused
	I1105 19:16:04.158775   73732 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:16:04.160801   73732 out.go:177] * Done! kubectl is now configured to use "embed-certs-271881" cluster and "default" namespace by default
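With the profile reported as Done and no client/server minor skew, the same check minikube performs at start.go:600 can be reproduced manually (illustrative commands, not part of the test run):

    kubectl version --output=yaml
    kubectl --context embed-certs-271881 get nodes -o wide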
	I1105 19:16:01.346039   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:03.346369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:05.846866   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:08.346383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:10.346570   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:12.347171   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:14.846335   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.840591   73496 pod_ready.go:82] duration metric: took 4m0.000143963s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	E1105 19:16:17.840620   73496 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:16:17.840649   73496 pod_ready.go:39] duration metric: took 4m11.022533189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:17.840682   73496 kubeadm.go:597] duration metric: took 4m18.432062793s to restartPrimaryControlPlane
	W1105 19:16:17.840732   73496 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:16:17.840755   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:16:21.064069   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:16:21.064607   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:21.064798   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:26.065202   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:26.065410   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:36.065932   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:36.066151   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
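When kubeadm keeps hitting the 10248 connection-refused error shown above, the usual next step on the affected node is to look at the kubelet unit itself. These are generic troubleshooting commands, not ones taken from this log:

    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 50
    curl -sSL http://localhost:10248/healthz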
	I1105 19:16:43.960239   73496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.119460606s)
	I1105 19:16:43.960324   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:43.986199   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:16:43.999287   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:16:44.013653   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:16:44.013675   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:16:44.013718   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:16:44.026073   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:16:44.026140   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:16:44.038723   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:16:44.050880   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:16:44.050957   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:16:44.061696   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.071739   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:16:44.072301   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.084030   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:16:44.093217   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:16:44.093275   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:16:44.102494   73496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:16:44.267623   73496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:16:52.534375   73496 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:16:52.534458   73496 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:16:52.534569   73496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:16:52.534704   73496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:16:52.534834   73496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:16:52.534930   73496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:16:52.536666   73496 out.go:235]   - Generating certificates and keys ...
	I1105 19:16:52.536759   73496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:16:52.536836   73496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:16:52.536911   73496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:16:52.536963   73496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:16:52.537060   73496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:16:52.537145   73496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:16:52.537232   73496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:16:52.537286   73496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:16:52.537361   73496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:16:52.537455   73496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:16:52.537500   73496 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:16:52.537578   73496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:16:52.537648   73496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:16:52.537725   73496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:16:52.537797   73496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:16:52.537905   73496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:16:52.537988   73496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:16:52.538075   73496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:16:52.538136   73496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:16:52.539588   73496 out.go:235]   - Booting up control plane ...
	I1105 19:16:52.539669   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:16:52.539743   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:16:52.539800   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:16:52.539885   73496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:16:52.539987   73496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:16:52.540057   73496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:16:52.540206   73496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:16:52.540300   73496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:16:52.540367   73496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733469ms
	I1105 19:16:52.540447   73496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:16:52.540528   73496 kubeadm.go:310] [api-check] The API server is healthy after 5.001962829s
	I1105 19:16:52.540651   73496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:16:52.540806   73496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:16:52.540899   73496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:16:52.541094   73496 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-459223 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:16:52.541164   73496 kubeadm.go:310] [bootstrap-token] Using token: f0bzzt.jihwqjda853aoxrb
	I1105 19:16:52.543528   73496 out.go:235]   - Configuring RBAC rules ...
	I1105 19:16:52.543658   73496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:16:52.543777   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:16:52.543942   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:16:52.544072   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:16:52.544222   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:16:52.544327   73496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:16:52.544453   73496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:16:52.544493   73496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:16:52.544536   73496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:16:52.544542   73496 kubeadm.go:310] 
	I1105 19:16:52.544593   73496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:16:52.544599   73496 kubeadm.go:310] 
	I1105 19:16:52.544687   73496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:16:52.544701   73496 kubeadm.go:310] 
	I1105 19:16:52.544739   73496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:16:52.544795   73496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:16:52.544855   73496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:16:52.544881   73496 kubeadm.go:310] 
	I1105 19:16:52.544958   73496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:16:52.544971   73496 kubeadm.go:310] 
	I1105 19:16:52.545039   73496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:16:52.545049   73496 kubeadm.go:310] 
	I1105 19:16:52.545111   73496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:16:52.545193   73496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:16:52.545251   73496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:16:52.545257   73496 kubeadm.go:310] 
	I1105 19:16:52.545324   73496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:16:52.545403   73496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:16:52.545409   73496 kubeadm.go:310] 
	I1105 19:16:52.545480   73496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.545605   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:16:52.545638   73496 kubeadm.go:310] 	--control-plane 
	I1105 19:16:52.545648   73496 kubeadm.go:310] 
	I1105 19:16:52.545779   73496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:16:52.545794   73496 kubeadm.go:310] 
	I1105 19:16:52.545903   73496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.546059   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
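Should the bootstrap token or discovery hash printed above need to be listed or recomputed later, the standard kubeadm and openssl commands apply. This is an illustrative sketch; note that this profile invokes kubeadm from /var/lib/minikube/binaries/v1.31.2 and keeps its certificates under /var/lib/minikube/certs, per the earlier lines of the init output:

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm token list
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'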
	I1105 19:16:52.546074   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:16:52.546083   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:16:52.548357   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:16:52.549732   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:16:52.560406   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
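The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is what configures the bridge CNI recommended for the "kvm2 + crio" combination. A minimal conflist of the same general shape is sketched below; every field value here is an assumption for illustration, not the exact content minikube wrote:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF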
	I1105 19:16:52.577268   73496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:16:52.577334   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:52.577373   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-459223 minikube.k8s.io/updated_at=2024_11_05T19_16_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=no-preload-459223 minikube.k8s.io/primary=true
	I1105 19:16:52.776299   73496 ops.go:34] apiserver oom_adj: -16
	I1105 19:16:52.776456   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.276618   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.777474   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.276726   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.777004   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.276725   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.777410   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.893941   73496 kubeadm.go:1113] duration metric: took 3.316665512s to wait for elevateKubeSystemPrivileges
	I1105 19:16:55.893984   73496 kubeadm.go:394] duration metric: took 4m56.532038314s to StartCluster
	I1105 19:16:55.894007   73496 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.894104   73496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:16:55.896620   73496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.896934   73496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:16:55.897120   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:16:55.897056   73496 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:16:55.897166   73496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-459223"
	I1105 19:16:55.897176   73496 addons.go:69] Setting default-storageclass=true in profile "no-preload-459223"
	I1105 19:16:55.897186   73496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-459223"
	I1105 19:16:55.897193   73496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-459223"
	I1105 19:16:55.897211   73496 addons.go:69] Setting metrics-server=true in profile "no-preload-459223"
	I1105 19:16:55.897231   73496 addons.go:234] Setting addon metrics-server=true in "no-preload-459223"
	W1105 19:16:55.897243   73496 addons.go:243] addon metrics-server should already be in state true
	I1105 19:16:55.897271   73496 host.go:66] Checking if "no-preload-459223" exists ...
	W1105 19:16:55.897195   73496 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:16:55.897323   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.897599   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897642   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897705   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897754   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897711   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897811   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.898341   73496 out.go:177] * Verifying Kubernetes components...
	I1105 19:16:55.899778   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:16:55.914218   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1105 19:16:55.914305   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1105 19:16:55.914726   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.914837   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.915283   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915305   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915391   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915418   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915642   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915757   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915804   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.916323   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.916367   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.916858   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1105 19:16:55.917296   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.917805   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.917832   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.918156   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.918678   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.918720   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.919527   73496 addons.go:234] Setting addon default-storageclass=true in "no-preload-459223"
	W1105 19:16:55.919549   73496 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:16:55.919576   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.919954   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.919996   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.932547   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I1105 19:16:55.933026   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.933588   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.933601   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.933918   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.934153   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.936094   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.937415   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I1105 19:16:55.937800   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.937812   73496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:16:55.938312   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.938324   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.938420   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I1105 19:16:55.938661   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.938816   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.938867   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:16:55.938894   73496 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:16:55.938918   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.939014   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.939350   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.939362   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.939855   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.940281   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.940310   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.940959   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.942661   73496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:16:55.942797   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943216   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.943392   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943422   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.943588   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.943842   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.944078   73496 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:55.944083   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.944096   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:16:55.944114   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.947574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.947767   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.947789   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.948125   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.948249   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.948343   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.948424   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.987691   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I1105 19:16:55.988131   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.988714   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.988739   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.989127   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.989325   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.991207   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.991453   73496 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:55.991472   73496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:16:55.991492   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.994362   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994800   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.994846   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994938   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.995145   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.995315   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.996088   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:56.109142   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:16:56.126382   73496 node_ready.go:35] waiting up to 6m0s for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138050   73496 node_ready.go:49] node "no-preload-459223" has status "Ready":"True"
	I1105 19:16:56.138076   73496 node_ready.go:38] duration metric: took 11.661265ms for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138087   73496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:56.143325   73496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:56.230205   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:16:56.230228   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:16:56.232603   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:56.259360   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:16:56.259388   73496 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:16:56.268694   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:56.321334   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:56.321364   73496 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:16:56.387409   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:57.010417   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010441   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010496   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010522   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010748   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.010795   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010804   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010812   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010818   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010817   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010830   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010838   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010843   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.011143   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011147   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011205   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011221   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.011209   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011298   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074127   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.074148   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.074476   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.074543   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074508   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.135875   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.135898   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136259   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136280   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136278   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136291   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.136308   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136703   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136747   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136757   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136767   73496 addons.go:475] Verifying addon metrics-server=true in "no-preload-459223"
	I1105 19:16:57.138699   73496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:16:56.066834   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:56.067140   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:57.140755   73496 addons.go:510] duration metric: took 1.243699533s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1105 19:16:58.154376   73496 pod_ready.go:103] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:17:00.149838   73496 pod_ready.go:93] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:00.149864   73496 pod_ready.go:82] duration metric: took 4.006514005s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:00.149876   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156460   73496 pod_ready.go:93] pod "kube-apiserver-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.156486   73496 pod_ready.go:82] duration metric: took 1.006602068s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156499   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160598   73496 pod_ready.go:93] pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.160618   73496 pod_ready.go:82] duration metric: took 4.110322ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160631   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164461   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.164482   73496 pod_ready.go:82] duration metric: took 3.842329ms for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164492   73496 pod_ready.go:39] duration metric: took 5.026393011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:17:01.164509   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:17:01.164566   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:17:01.183307   73496 api_server.go:72] duration metric: took 5.286331754s to wait for apiserver process to appear ...
	I1105 19:17:01.183338   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:17:01.183357   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:17:01.189083   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:17:01.190439   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:17:01.190469   73496 api_server.go:131] duration metric: took 7.123058ms to wait for apiserver health ...
	I1105 19:17:01.190479   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:17:01.198820   73496 system_pods.go:59] 9 kube-system pods found
	I1105 19:17:01.198854   73496 system_pods.go:61] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198862   73496 system_pods.go:61] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198869   73496 system_pods.go:61] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.198873   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.198879   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.198883   73496 system_pods.go:61] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.198887   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.198893   73496 system_pods.go:61] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.198896   73496 system_pods.go:61] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.198903   73496 system_pods.go:74] duration metric: took 8.418414ms to wait for pod list to return data ...
	I1105 19:17:01.198913   73496 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:17:01.202229   73496 default_sa.go:45] found service account: "default"
	I1105 19:17:01.202251   73496 default_sa.go:55] duration metric: took 3.332652ms for default service account to be created ...
	I1105 19:17:01.202260   73496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:17:01.208774   73496 system_pods.go:86] 9 kube-system pods found
	I1105 19:17:01.208803   73496 system_pods.go:89] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208811   73496 system_pods.go:89] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208817   73496 system_pods.go:89] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.208821   73496 system_pods.go:89] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.208825   73496 system_pods.go:89] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.208828   73496 system_pods.go:89] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.208833   73496 system_pods.go:89] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.208838   73496 system_pods.go:89] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.208842   73496 system_pods.go:89] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.208848   73496 system_pods.go:126] duration metric: took 6.584071ms to wait for k8s-apps to be running ...
	I1105 19:17:01.208856   73496 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:17:01.208898   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:01.225005   73496 system_svc.go:56] duration metric: took 16.138051ms WaitForService to wait for kubelet
	I1105 19:17:01.225038   73496 kubeadm.go:582] duration metric: took 5.328067688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:17:01.225062   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:17:01.347771   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:17:01.347799   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:17:01.347813   73496 node_conditions.go:105] duration metric: took 122.746343ms to run NodePressure ...
	I1105 19:17:01.347826   73496 start.go:241] waiting for startup goroutines ...
	I1105 19:17:01.347834   73496 start.go:246] waiting for cluster config update ...
	I1105 19:17:01.347846   73496 start.go:255] writing updated cluster config ...
	I1105 19:17:01.348126   73496 ssh_runner.go:195] Run: rm -f paused
	I1105 19:17:01.396396   73496 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:17:01.398528   73496 out.go:177] * Done! kubectl is now configured to use "no-preload-459223" cluster and "default" namespace by default
	I1105 19:17:36.069129   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:17:36.069396   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:17:36.069426   74485 kubeadm.go:310] 
	I1105 19:17:36.069489   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:17:36.069572   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:17:36.069591   74485 kubeadm.go:310] 
	I1105 19:17:36.069638   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:17:36.069699   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:17:36.069843   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:17:36.069852   74485 kubeadm.go:310] 
	I1105 19:17:36.069967   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:17:36.070017   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:17:36.070067   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:17:36.070074   74485 kubeadm.go:310] 
	I1105 19:17:36.070216   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:17:36.070328   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:17:36.070345   74485 kubeadm.go:310] 
	I1105 19:17:36.070486   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:17:36.070622   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:17:36.070690   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:17:36.070758   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:17:36.070767   74485 kubeadm.go:310] 
	I1105 19:17:36.071471   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:17:36.071558   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:17:36.071652   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1105 19:17:36.071791   74485 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1105 19:17:36.071838   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:17:36.527864   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:36.543211   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:17:36.552656   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:17:36.552676   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:17:36.552734   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:17:36.562296   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:17:36.562360   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:17:36.571759   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:17:36.580534   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:17:36.580586   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:17:36.590320   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.599165   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:17:36.599235   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.608340   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:17:36.616935   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:17:36.616986   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:17:36.625948   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:17:36.843267   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:19:32.770686   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:19:32.770828   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 19:19:32.772504   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:19:32.772564   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:19:32.772656   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:19:32.772784   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:19:32.772893   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:19:32.772971   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:19:32.774648   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:19:32.774726   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:19:32.774804   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:19:32.774902   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:19:32.775012   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:19:32.775144   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:19:32.775223   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:19:32.775307   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:19:32.775397   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:19:32.775487   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:19:32.775597   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:19:32.775651   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:19:32.775728   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:19:32.775796   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:19:32.775864   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:19:32.775961   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:19:32.776041   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:19:32.776175   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:19:32.776281   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:19:32.776330   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:19:32.776417   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:19:32.777837   74485 out.go:235]   - Booting up control plane ...
	I1105 19:19:32.777940   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:19:32.778032   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:19:32.778134   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:19:32.778248   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:19:32.778489   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:19:32.778563   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:19:32.778652   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.778960   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779080   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779302   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779399   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779663   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779766   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779990   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780051   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.780241   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780260   74485 kubeadm.go:310] 
	I1105 19:19:32.780325   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:19:32.780381   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:19:32.780391   74485 kubeadm.go:310] 
	I1105 19:19:32.780438   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:19:32.780486   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:19:32.780627   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:19:32.780639   74485 kubeadm.go:310] 
	I1105 19:19:32.780748   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:19:32.780790   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:19:32.780819   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:19:32.780825   74485 kubeadm.go:310] 
	I1105 19:19:32.780961   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:19:32.781048   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:19:32.781055   74485 kubeadm.go:310] 
	I1105 19:19:32.781144   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:19:32.781225   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:19:32.781293   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:19:32.781394   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:19:32.781475   74485 kubeadm.go:394] duration metric: took 8m1.792270232s to StartCluster
	I1105 19:19:32.781485   74485 kubeadm.go:310] 
	I1105 19:19:32.781522   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:19:32.781589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:19:32.825435   74485 cri.go:89] found id: ""
	I1105 19:19:32.825465   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.825475   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:19:32.825482   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:19:32.825543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:19:32.859245   74485 cri.go:89] found id: ""
	I1105 19:19:32.859275   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.859286   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:19:32.859293   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:19:32.859355   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:19:32.890801   74485 cri.go:89] found id: ""
	I1105 19:19:32.890833   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.890844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:19:32.890851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:19:32.890919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:19:32.925244   74485 cri.go:89] found id: ""
	I1105 19:19:32.925273   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.925280   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:19:32.925287   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:19:32.925352   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:19:32.959091   74485 cri.go:89] found id: ""
	I1105 19:19:32.959118   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.959129   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:19:32.959137   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:19:32.959191   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:19:32.990230   74485 cri.go:89] found id: ""
	I1105 19:19:32.990264   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.990276   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:19:32.990284   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:19:32.990343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:19:33.027461   74485 cri.go:89] found id: ""
	I1105 19:19:33.027494   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.027505   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:19:33.027512   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:19:33.027574   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:19:33.070819   74485 cri.go:89] found id: ""
	I1105 19:19:33.070847   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.070858   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:19:33.070869   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:19:33.070883   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:19:33.122580   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:19:33.122615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:19:33.136015   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:19:33.136043   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:19:33.213727   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:19:33.213750   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:19:33.213762   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:19:33.324287   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:19:33.324333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1105 19:19:33.384732   74485 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 19:19:33.384785   74485 out.go:270] * 
	W1105 19:19:33.384844   74485 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.384857   74485 out.go:270] * 
	W1105 19:19:33.385632   74485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:19:33.388860   74485 out.go:201] 
	W1105 19:19:33.390328   74485 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.390366   74485 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 19:19:33.390393   74485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 19:19:33.391785   74485 out.go:201] 
	
	
	==> CRI-O <==
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.156273662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834706156244705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecaea746-e966-4399-84b7-bfbf8a33462f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.156736200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=196035d0-d50c-482b-af1e-ffc04579cec1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.156796757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=196035d0-d50c-482b-af1e-ffc04579cec1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.157013367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0,PodSandboxId:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834154109601835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62,PodSandboxId:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730834153690549009,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d62880dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba,PodSandboxId:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834153587148495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
be11308-47aa-454a-97bd-5e6c5145a99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a,PodSandboxId:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730834152503786963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9,PodSandboxId:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730834141613931550,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860,PodSandboxId:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834141603531874,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24,PodSandboxId:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834141575994715,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330,PodSandboxId:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834141516195318,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a,PodSandboxId:30ffff2e57828a95015778a477406d68377bd862f4d682ab3bccf27942f2fec1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833852785620668,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=196035d0-d50c-482b-af1e-ffc04579cec1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.197787980Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db5649cb-83e4-4224-ae8c-72297b14f25d name=/runtime.v1.RuntimeService/Version
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.197861685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db5649cb-83e4-4224-ae8c-72297b14f25d name=/runtime.v1.RuntimeService/Version
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.198919417Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbecaa79-0105-40d2-b408-233494569d3b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.199393363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834706199346869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbecaa79-0105-40d2-b408-233494569d3b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.199874584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c19f3057-2cc2-4694-9ee8-ef520bad566d name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.199949184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c19f3057-2cc2-4694-9ee8-ef520bad566d name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.200217342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0,PodSandboxId:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834154109601835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62,PodSandboxId:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730834153690549009,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d62880dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba,PodSandboxId:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834153587148495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
be11308-47aa-454a-97bd-5e6c5145a99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a,PodSandboxId:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730834152503786963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9,PodSandboxId:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730834141613931550,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860,PodSandboxId:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834141603531874,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24,PodSandboxId:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834141575994715,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330,PodSandboxId:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834141516195318,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a,PodSandboxId:30ffff2e57828a95015778a477406d68377bd862f4d682ab3bccf27942f2fec1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833852785620668,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c19f3057-2cc2-4694-9ee8-ef520bad566d name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.238897869Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f9b5f31-36fc-4490-8a0f-91f346702929 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.238983279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f9b5f31-36fc-4490-8a0f-91f346702929 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.240346175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e8b9df7-d986-4835-89e3-bde2b4ae4455 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.240764419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834706240741431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e8b9df7-d986-4835-89e3-bde2b4ae4455 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.241418693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da27b18e-1d05-44c3-93fa-a562dd0f52f8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.241486957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da27b18e-1d05-44c3-93fa-a562dd0f52f8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.241755313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0,PodSandboxId:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834154109601835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62,PodSandboxId:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730834153690549009,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d62880dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba,PodSandboxId:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834153587148495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
be11308-47aa-454a-97bd-5e6c5145a99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a,PodSandboxId:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730834152503786963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9,PodSandboxId:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730834141613931550,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860,PodSandboxId:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834141603531874,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24,PodSandboxId:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834141575994715,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330,PodSandboxId:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834141516195318,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a,PodSandboxId:30ffff2e57828a95015778a477406d68377bd862f4d682ab3bccf27942f2fec1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833852785620668,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da27b18e-1d05-44c3-93fa-a562dd0f52f8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.277572145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e46d5d8-ba8d-4663-bfc2-e51019370896 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.277672727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e46d5d8-ba8d-4663-bfc2-e51019370896 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.278850305Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe51fdd9-2ffe-4c83-9f84-6aeb6c382431 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.279312324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834706279284872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe51fdd9-2ffe-4c83-9f84-6aeb6c382431 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.279913256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=770541f2-fee4-44f7-b9d4-5786104df650 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.279974598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=770541f2-fee4-44f7-b9d4-5786104df650 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:25:06 embed-certs-271881 crio[717]: time="2024-11-05 19:25:06.280233863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0,PodSandboxId:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834154109601835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62,PodSandboxId:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730834153690549009,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d62880dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba,PodSandboxId:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834153587148495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
be11308-47aa-454a-97bd-5e6c5145a99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a,PodSandboxId:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730834152503786963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9,PodSandboxId:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730834141613931550,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860,PodSandboxId:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834141603531874,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24,PodSandboxId:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834141575994715,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330,PodSandboxId:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834141516195318,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a,PodSandboxId:30ffff2e57828a95015778a477406d68377bd862f4d682ab3bccf27942f2fec1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833852785620668,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=770541f2-fee4-44f7-b9d4-5786104df650 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8d76c3e72e03c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   d69437d54370a       coredns-7c65d6cfc9-7dk86
	da920711eafbb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5d590ebf53919       storage-provisioner
	ac3f242769735       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   c4bf692d61b15       coredns-7c65d6cfc9-v5vt6
	ff003c2d0bf73       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   23b8970401e7c       kube-proxy-nfxcj
	e7a67250a75d4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   ef4acb1221f6a       kube-apiserver-embed-certs-271881
	bb4479cf128df       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   5016aebbc1d6e       kube-scheduler-embed-certs-271881
	bfdf7a59551e2       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   d4bf9cb5df4bb       kube-controller-manager-embed-certs-271881
	fa1edb4a8395e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   71c1d456dadcf       etcd-embed-certs-271881
	d2930f9215487       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   30ffff2e57828       kube-apiserver-embed-certs-271881
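The single Exited entry above is attempt 1 of the kube-apiserver. If the reason for that exit matters, its logs can be pulled on the node with the same crictl invocation quoted in the kubeadm output, using the full container ID from the ListContainers responses above:

    # Fetch logs of the exited kube-apiserver container (attempt 1)
    crictl --runtime-endpoint /var/run/crio/crio.sock logs d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a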
	
	
	==> coredns [8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-271881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-271881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=embed-certs-271881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T19_15_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 19:15:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-271881
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 19:24:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 19:21:02 +0000   Tue, 05 Nov 2024 19:15:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 19:21:02 +0000   Tue, 05 Nov 2024 19:15:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 19:21:02 +0000   Tue, 05 Nov 2024 19:15:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 19:21:02 +0000   Tue, 05 Nov 2024 19:15:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    embed-certs-271881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c03b11c8707426ab3b2acfa01fb5b0f
	  System UUID:                3c03b11c-8707-426a-b3b2-acfa01fb5b0f
	  Boot ID:                    d74b63b1-c0ce-4b62-8afd-2efa3b575194
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7dk86                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-v5vt6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-embed-certs-271881                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-271881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-271881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-nfxcj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-embed-certs-271881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-6867b74b74-tvl8v               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m26s)  kubelet          Node embed-certs-271881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m26s)  kubelet          Node embed-certs-271881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m26s)  kubelet          Node embed-certs-271881 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-271881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-271881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-271881 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s                  node-controller  Node embed-certs-271881 event: Registered Node embed-certs-271881 in Controller
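This node description can be regenerated against the live profile, assuming the kubectl context for this cluster carries the profile name embed-certs-271881 (as minikube normally sets it); the second command checks the metrics-server pod listed in the table above:

    # Re-run the node description and check one of the kube-system pods named above
    kubectl --context embed-certs-271881 describe node embed-certs-271881
    kubectl --context embed-certs-271881 -n kube-system get pod metrics-server-6867b74b74-tvl8v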
	
	
	==> dmesg <==
	[  +0.051362] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.844763] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968429] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.527862] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.014342] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.061423] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073311] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.206820] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.145906] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.300218] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +3.972273] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +2.369719] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[  +0.060546] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.537294] kauditd_printk_skb: 69 callbacks suppressed
	[Nov 5 19:11] kauditd_printk_skb: 85 callbacks suppressed
	[Nov 5 19:15] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.087594] systemd-fstab-generator[2587]: Ignoring "noauto" option for root device
	[  +4.629052] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.443249] systemd-fstab-generator[2907]: Ignoring "noauto" option for root device
	[  +5.892741] systemd-fstab-generator[3027]: Ignoring "noauto" option for root device
	[  +0.097767] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.786644] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330] <==
	{"level":"info","ts":"2024-11-05T19:15:41.856729Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T19:15:41.856753Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T19:15:41.856828Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-11-05T19:15:41.856854Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-11-05T19:15:41.857334Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","added-peer-id":"ded7f9817c909548","added-peer-peer-urls":["https://192.168.39.58:2380"]}
	{"level":"info","ts":"2024-11-05T19:15:42.788136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 is starting a new election at term 1"}
	{"level":"info","ts":"2024-11-05T19:15:42.788227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-11-05T19:15:42.788260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgPreVoteResp from ded7f9817c909548 at term 1"}
	{"level":"info","ts":"2024-11-05T19:15:42.788277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became candidate at term 2"}
	{"level":"info","ts":"2024-11-05T19:15:42.788285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgVoteResp from ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-11-05T19:15:42.788296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became leader at term 2"}
	{"level":"info","ts":"2024-11-05T19:15:42.788307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ded7f9817c909548 elected leader ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-11-05T19:15:42.792258Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:15:42.794394Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ded7f9817c909548","local-member-attributes":"{Name:embed-certs-271881 ClientURLs:[https://192.168.39.58:2379]}","request-path":"/0/members/ded7f9817c909548/attributes","cluster-id":"91c640bc00cd2aea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T19:15:42.797091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:15:42.797434Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:15:42.797610Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:15:42.797713Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:15:42.797754Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:15:42.800441Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:15:42.803269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T19:15:42.813573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:15:42.816333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
	{"level":"info","ts":"2024-11-05T19:15:42.816438Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T19:15:42.816465Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:25:06 up 14 min,  0 users,  load average: 0.15, 0.26, 0.18
	Linux embed-certs-271881 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a] <==
	W1105 19:15:33.375683       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.392376       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.437009       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.442633       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.485861       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.516653       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.607332       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.746746       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.826314       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.140853       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.267607       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.391618       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.554531       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.584115       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.704758       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.734485       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.767702       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.820430       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.871270       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.927406       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.928813       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.971637       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:38.115410       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:38.169410       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:38.190606       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9] <==
	W1105 19:20:45.373414       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:20:45.373515       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:20:45.374524       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:20:45.374566       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:21:45.375043       1 handler_proxy.go:99] no RequestInfo found in the context
	W1105 19:21:45.375132       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:21:45.375400       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1105 19:21:45.375418       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:21:45.376592       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:21:45.376626       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:23:45.376979       1 handler_proxy.go:99] no RequestInfo found in the context
	W1105 19:23:45.377006       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:23:45.377423       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1105 19:23:45.377456       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:23:45.378637       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:23:45.378701       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24] <==
	E1105 19:19:51.371245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:19:51.791498       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:20:21.376715       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:20:21.798691       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:20:51.382973       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:20:51.806374       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:21:02.229350       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-271881"
	E1105 19:21:21.390415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:21:21.814677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:21:40.751406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="410.154µs"
	E1105 19:21:51.397373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:21:51.821899       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:21:55.744737       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="96.783µs"
	E1105 19:22:21.403348       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:22:21.830014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:22:51.409696       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:22:51.837607       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:23:21.415868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:23:21.845886       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:23:51.423692       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:23:51.853980       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:24:21.429620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:24:21.862413       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:24:51.436261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:24:51.869636       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 19:15:52.935446       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 19:15:52.965631       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E1105 19:15:52.965712       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 19:15:53.052204       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 19:15:53.052253       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 19:15:53.052313       1 server_linux.go:169] "Using iptables Proxier"
	I1105 19:15:53.055383       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 19:15:53.055733       1 server.go:483] "Version info" version="v1.31.2"
	I1105 19:15:53.055763       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:15:53.057672       1 config.go:199] "Starting service config controller"
	I1105 19:15:53.057703       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 19:15:53.057736       1 config.go:105] "Starting endpoint slice config controller"
	I1105 19:15:53.057742       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 19:15:53.062658       1 config.go:328] "Starting node config controller"
	I1105 19:15:53.062690       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 19:15:53.159347       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 19:15:53.159399       1 shared_informer.go:320] Caches are synced for service config
	I1105 19:15:53.165523       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860] <==
	W1105 19:15:44.418132       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 19:15:44.418156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:44.419673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 19:15:44.419711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:44.419770       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 19:15:44.419793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:44.419837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 19:15:44.419857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:44.419925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 19:15:44.419948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.305783       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 19:15:45.305829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.362906       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 19:15:45.362961       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 19:15:45.385488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 19:15:45.385614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.400602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 19:15:45.400722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.467673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 19:15:45.467875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.534873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 19:15:45.535004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.546258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 19:15:45.546385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1105 19:15:48.006373       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 19:24:04 embed-certs-271881 kubelet[2914]: E1105 19:24:04.732304    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:24:06 embed-certs-271881 kubelet[2914]: E1105 19:24:06.865855    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834646865607486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:06 embed-certs-271881 kubelet[2914]: E1105 19:24:06.865895    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834646865607486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:16 embed-certs-271881 kubelet[2914]: E1105 19:24:16.868808    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834656868348197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:16 embed-certs-271881 kubelet[2914]: E1105 19:24:16.868849    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834656868348197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:19 embed-certs-271881 kubelet[2914]: E1105 19:24:19.732019    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:24:26 embed-certs-271881 kubelet[2914]: E1105 19:24:26.870957    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834666870555158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:26 embed-certs-271881 kubelet[2914]: E1105 19:24:26.871499    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834666870555158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:31 embed-certs-271881 kubelet[2914]: E1105 19:24:31.732500    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:24:36 embed-certs-271881 kubelet[2914]: E1105 19:24:36.872808    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834676872514151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:36 embed-certs-271881 kubelet[2914]: E1105 19:24:36.872832    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834676872514151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:42 embed-certs-271881 kubelet[2914]: E1105 19:24:42.732905    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:24:46 embed-certs-271881 kubelet[2914]: E1105 19:24:46.765718    2914 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 19:24:46 embed-certs-271881 kubelet[2914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 19:24:46 embed-certs-271881 kubelet[2914]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 19:24:46 embed-certs-271881 kubelet[2914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 19:24:46 embed-certs-271881 kubelet[2914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 19:24:46 embed-certs-271881 kubelet[2914]: E1105 19:24:46.874375    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834686873979273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:46 embed-certs-271881 kubelet[2914]: E1105 19:24:46.874403    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834686873979273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:53 embed-certs-271881 kubelet[2914]: E1105 19:24:53.732281    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:24:56 embed-certs-271881 kubelet[2914]: E1105 19:24:56.876504    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834696876119394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:24:56 embed-certs-271881 kubelet[2914]: E1105 19:24:56.878037    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834696876119394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:04 embed-certs-271881 kubelet[2914]: E1105 19:25:04.734841    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:25:06 embed-certs-271881 kubelet[2914]: E1105 19:25:06.880698    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834706880294461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:06 embed-certs-271881 kubelet[2914]: E1105 19:25:06.880736    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834706880294461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62] <==
	I1105 19:15:53.864950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 19:15:53.881914       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 19:15:53.882242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 19:15:53.931880       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 19:15:53.932507       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff0ef236-a5af-41c4-bd6f-5115de9de6bb", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-271881_b401f14b-02e0-4c5c-ab66-b1af16c5a036 became leader
	I1105 19:15:53.933296       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-271881_b401f14b-02e0-4c5c-ab66-b1af16c5a036!
	I1105 19:15:54.033736       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-271881_b401f14b-02e0-4c5c-ab66-b1af16c5a036!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271881 -n embed-certs-271881
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-271881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tvl8v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-271881 describe pod metrics-server-6867b74b74-tvl8v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-271881 describe pod metrics-server-6867b74b74-tvl8v: exit status 1 (64.4234ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tvl8v" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-271881 describe pod metrics-server-6867b74b74-tvl8v: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.25s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.13s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1105 19:17:10.002076   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:17:21.924946   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:17:31.418959   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:17:55.009329   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:18:01.006785   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:19:06.920827   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:19:18.074686   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:19:24.073446   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-459223 -n no-preload-459223
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-11-05 19:26:01.93414762 +0000 UTC m=+6295.606872320
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-459223 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-459223 logs -n 25: (2.0022483s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-929548 sudo cat                              | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo find                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo crio                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-929548                                       | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-537175 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | disable-driver-mounts-537175                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:04 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-459223             | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-271881            | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:07:52.649090   74485 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:07:52.649200   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649205   74485 out.go:358] Setting ErrFile to fd 2...
	I1105 19:07:52.649210   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649374   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:07:52.649909   74485 out.go:352] Setting JSON to false
	I1105 19:07:52.650785   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6615,"bootTime":1730827058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:07:52.650878   74485 start.go:139] virtualization: kvm guest
	I1105 19:07:52.652866   74485 out.go:177] * [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:07:52.654107   74485 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:07:52.654107   74485 notify.go:220] Checking for updates...
	I1105 19:07:52.655282   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:07:52.656379   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:07:52.657451   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:07:52.658694   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:07:52.659835   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:07:52.661251   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:07:52.661622   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.661660   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.677005   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I1105 19:07:52.677521   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.678096   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.678118   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.678489   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.678735   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.680466   74485 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1105 19:07:52.681734   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:07:52.682087   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.682139   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.697071   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1105 19:07:52.697503   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.697958   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.697980   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.698259   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.698439   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.732962   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:07:52.734079   74485 start.go:297] selected driver: kvm2
	I1105 19:07:52.734094   74485 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.734209   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:07:52.734912   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.735038   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:07:52.750214   74485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:07:52.750609   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:07:52.750641   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:07:52.750697   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:07:52.750745   74485 start.go:340] cluster config:
	{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.750877   74485 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.753288   74485 out.go:177] * Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	I1105 19:07:50.739209   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:53.811246   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:52.754354   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:07:52.754407   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 19:07:52.754425   74485 cache.go:56] Caching tarball of preloaded images
	I1105 19:07:52.754503   74485 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:07:52.754515   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 19:07:52.754610   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:07:52.754817   74485 start.go:360] acquireMachinesLock for old-k8s-version-567666: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:07:59.891257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:02.963247   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:09.043263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:12.115289   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:18.195275   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:21.267213   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:27.347251   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:30.419240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:36.499291   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:39.571255   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:45.651258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:48.723262   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:54.803265   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:57.875236   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:03.955241   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:07.027229   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:13.107258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:16.179257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:22.259227   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:25.331263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:31.411234   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:34.483240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:40.563258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:43.635253   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:49.715287   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:52.787276   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:58.867242   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:01.939296   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:08.019268   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:11.091350   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:17.171266   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:20.243245   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:23.247511   73732 start.go:364] duration metric: took 4m30.277290481s to acquireMachinesLock for "embed-certs-271881"
	I1105 19:10:23.247565   73732 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:23.247590   73732 fix.go:54] fixHost starting: 
	I1105 19:10:23.248173   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:23.248235   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:23.263573   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I1105 19:10:23.264016   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:23.264437   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:10:23.264461   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:23.264888   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:23.265122   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:23.265311   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:10:23.267000   73732 fix.go:112] recreateIfNeeded on embed-certs-271881: state=Stopped err=<nil>
	I1105 19:10:23.267031   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	W1105 19:10:23.267183   73732 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:23.269188   73732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-271881" ...
	I1105 19:10:23.244961   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:23.245021   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245327   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:10:23.245352   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245536   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:10:23.247352   73496 machine.go:96] duration metric: took 4m37.425023044s to provisionDockerMachine
	I1105 19:10:23.247393   73496 fix.go:56] duration metric: took 4m37.446801616s for fixHost
	I1105 19:10:23.247400   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 4m37.446835698s
	W1105 19:10:23.247424   73496 start.go:714] error starting host: provision: host is not running
	W1105 19:10:23.247522   73496 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1105 19:10:23.247534   73496 start.go:729] Will try again in 5 seconds ...
	I1105 19:10:23.270443   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Start
	I1105 19:10:23.270681   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring networks are active...
	I1105 19:10:23.271552   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network default is active
	I1105 19:10:23.271924   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network mk-embed-certs-271881 is active
	I1105 19:10:23.272243   73732 main.go:141] libmachine: (embed-certs-271881) Getting domain xml...
	I1105 19:10:23.273027   73732 main.go:141] libmachine: (embed-certs-271881) Creating domain...
	I1105 19:10:24.503219   73732 main.go:141] libmachine: (embed-certs-271881) Waiting to get IP...
	I1105 19:10:24.504067   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.504444   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.504503   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.504415   75020 retry.go:31] will retry after 194.539819ms: waiting for machine to come up
	I1105 19:10:24.701086   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.701552   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.701579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.701501   75020 retry.go:31] will retry after 361.371677ms: waiting for machine to come up
	I1105 19:10:25.064078   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.064484   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.064512   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.064433   75020 retry.go:31] will retry after 442.206433ms: waiting for machine to come up
	I1105 19:10:25.507981   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.508380   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.508405   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.508338   75020 retry.go:31] will retry after 573.453662ms: waiting for machine to come up
	I1105 19:10:26.083299   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.083727   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.083753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.083670   75020 retry.go:31] will retry after 686.210957ms: waiting for machine to come up
	I1105 19:10:26.771637   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.772070   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.772112   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.772062   75020 retry.go:31] will retry after 685.825223ms: waiting for machine to come up
	I1105 19:10:27.459230   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:27.459652   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:27.459677   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:27.459616   75020 retry.go:31] will retry after 1.167971852s: waiting for machine to come up
	I1105 19:10:28.247729   73496 start.go:360] acquireMachinesLock for no-preload-459223: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:10:28.629194   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:28.629526   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:28.629549   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:28.629488   75020 retry.go:31] will retry after 1.180980288s: waiting for machine to come up
	I1105 19:10:29.812048   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:29.812445   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:29.812475   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:29.812390   75020 retry.go:31] will retry after 1.527253183s: waiting for machine to come up
	I1105 19:10:31.342147   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:31.342519   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:31.342546   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:31.342467   75020 retry.go:31] will retry after 1.597485878s: waiting for machine to come up
	I1105 19:10:32.942141   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:32.942459   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:32.942505   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:32.942431   75020 retry.go:31] will retry after 2.416793509s: waiting for machine to come up
	I1105 19:10:35.360354   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:35.360717   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:35.360743   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:35.360674   75020 retry.go:31] will retry after 3.193637492s: waiting for machine to come up
	I1105 19:10:38.556294   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:38.556744   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:38.556775   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:38.556673   75020 retry.go:31] will retry after 3.819760443s: waiting for machine to come up
	I1105 19:10:42.380607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381140   73732 main.go:141] libmachine: (embed-certs-271881) Found IP for machine: 192.168.39.58
	I1105 19:10:42.381172   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has current primary IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381196   73732 main.go:141] libmachine: (embed-certs-271881) Reserving static IP address...
	I1105 19:10:42.381607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.381634   73732 main.go:141] libmachine: (embed-certs-271881) Reserved static IP address: 192.168.39.58
	I1105 19:10:42.381647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | skip adding static IP to network mk-embed-certs-271881 - found existing host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"}
	I1105 19:10:42.381671   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Getting to WaitForSSH function...
	I1105 19:10:42.381686   73732 main.go:141] libmachine: (embed-certs-271881) Waiting for SSH to be available...
	I1105 19:10:42.383908   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384306   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.384333   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384427   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH client type: external
	I1105 19:10:42.384458   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa (-rw-------)
	I1105 19:10:42.384486   73732 main.go:141] libmachine: (embed-certs-271881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:10:42.384502   73732 main.go:141] libmachine: (embed-certs-271881) DBG | About to run SSH command:
	I1105 19:10:42.384510   73732 main.go:141] libmachine: (embed-certs-271881) DBG | exit 0
	I1105 19:10:42.506807   73732 main.go:141] libmachine: (embed-certs-271881) DBG | SSH cmd err, output: <nil>: 
	I1105 19:10:42.507217   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetConfigRaw
	I1105 19:10:42.507868   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.510314   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.510680   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510936   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/config.json ...
	I1105 19:10:42.511183   73732 machine.go:93] provisionDockerMachine start ...
	I1105 19:10:42.511203   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:42.511426   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.513759   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514111   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.514144   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514290   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.514473   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514654   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514827   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.514979   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.515191   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.515202   73732 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:10:42.619241   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:10:42.619273   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619517   73732 buildroot.go:166] provisioning hostname "embed-certs-271881"
	I1105 19:10:42.619555   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619735   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.622695   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623117   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.623146   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623304   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.623465   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623632   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623825   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.623957   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.624122   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.624135   73732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-271881 && echo "embed-certs-271881" | sudo tee /etc/hostname
	I1105 19:10:42.740722   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-271881
	
	I1105 19:10:42.740749   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.743579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.743922   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.743945   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.744160   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.744343   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744470   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.744756   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.744950   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.744972   73732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-271881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-271881/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-271881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:10:42.854869   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:42.854898   73732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:10:42.854926   73732 buildroot.go:174] setting up certificates
	I1105 19:10:42.854940   73732 provision.go:84] configureAuth start
	I1105 19:10:42.854948   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.855222   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.857913   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858228   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.858252   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858440   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.860753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861041   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.861062   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861222   73732 provision.go:143] copyHostCerts
	I1105 19:10:42.861274   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:10:42.861291   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:10:42.861385   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:10:42.861543   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:10:42.861556   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:10:42.861595   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:10:42.861671   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:10:42.861681   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:10:42.861713   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:10:42.861781   73732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.embed-certs-271881 san=[127.0.0.1 192.168.39.58 embed-certs-271881 localhost minikube]
	I1105 19:10:43.659372   74141 start.go:364] duration metric: took 3m39.006624915s to acquireMachinesLock for "default-k8s-diff-port-608095"
	I1105 19:10:43.659450   74141 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:43.659458   74141 fix.go:54] fixHost starting: 
	I1105 19:10:43.659814   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:43.659874   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:43.677604   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I1105 19:10:43.678132   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:43.678624   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:10:43.678649   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:43.679047   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:43.679250   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:10:43.679407   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:10:43.681036   74141 fix.go:112] recreateIfNeeded on default-k8s-diff-port-608095: state=Stopped err=<nil>
	I1105 19:10:43.681063   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	W1105 19:10:43.681194   74141 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:43.683110   74141 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-608095" ...
	I1105 19:10:43.684451   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Start
	I1105 19:10:43.684639   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring networks are active...
	I1105 19:10:43.685436   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network default is active
	I1105 19:10:43.685983   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network mk-default-k8s-diff-port-608095 is active
	I1105 19:10:43.686398   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Getting domain xml...
	I1105 19:10:43.687143   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Creating domain...
	I1105 19:10:43.044648   73732 provision.go:177] copyRemoteCerts
	I1105 19:10:43.044703   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:10:43.044730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.047120   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047506   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.047538   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047717   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.047886   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.048037   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.048186   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.129098   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:10:43.154632   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1105 19:10:43.179681   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 19:10:43.205598   73732 provision.go:87] duration metric: took 350.648117ms to configureAuth
	I1105 19:10:43.205622   73732 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:10:43.205822   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:10:43.205900   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.208446   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.208763   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.208799   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.209006   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.209190   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209489   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.209611   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.209828   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.209850   73732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:10:43.431540   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:10:43.431569   73732 machine.go:96] duration metric: took 920.370689ms to provisionDockerMachine
	I1105 19:10:43.431582   73732 start.go:293] postStartSetup for "embed-certs-271881" (driver="kvm2")
	I1105 19:10:43.431595   73732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:10:43.431617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.431912   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:10:43.431940   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.434821   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435170   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.435193   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435338   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.435532   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.435714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.435851   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.517391   73732 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:10:43.521532   73732 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:10:43.521553   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:10:43.521632   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:10:43.521721   73732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:10:43.521839   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:10:43.531045   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:43.556596   73732 start.go:296] duration metric: took 125.000692ms for postStartSetup
	I1105 19:10:43.556634   73732 fix.go:56] duration metric: took 20.309059136s for fixHost
	I1105 19:10:43.556663   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.558888   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559181   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.559220   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.559531   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559674   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.559934   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.560096   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.560106   73732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:10:43.659219   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833843.637801657
	
	I1105 19:10:43.659240   73732 fix.go:216] guest clock: 1730833843.637801657
	I1105 19:10:43.659247   73732 fix.go:229] Guest: 2024-11-05 19:10:43.637801657 +0000 UTC Remote: 2024-11-05 19:10:43.556637855 +0000 UTC m=+290.729857868 (delta=81.163802ms)
	I1105 19:10:43.659284   73732 fix.go:200] guest clock delta is within tolerance: 81.163802ms
	I1105 19:10:43.659290   73732 start.go:83] releasing machines lock for "embed-certs-271881", held for 20.411743975s
	I1105 19:10:43.659324   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.659589   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:43.662581   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663025   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.663058   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663214   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663907   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.664017   73732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:10:43.664057   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.664108   73732 ssh_runner.go:195] Run: cat /version.json
	I1105 19:10:43.664131   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.666998   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667059   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667365   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667395   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667424   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667438   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667543   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667638   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667897   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667968   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667996   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.668078   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.775067   73732 ssh_runner.go:195] Run: systemctl --version
	I1105 19:10:43.780892   73732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:10:43.919564   73732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:10:43.926362   73732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:10:43.926422   73732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:10:43.942359   73732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:10:43.942378   73732 start.go:495] detecting cgroup driver to use...
	I1105 19:10:43.942450   73732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:10:43.964650   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:10:43.980651   73732 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:10:43.980717   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:10:43.993988   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:10:44.007440   73732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:10:44.132040   73732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:10:44.314220   73732 docker.go:233] disabling docker service ...
	I1105 19:10:44.314294   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:10:44.337362   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:10:44.351277   73732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:10:44.485105   73732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:10:44.621596   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:10:44.636254   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:10:44.656530   73732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:10:44.656595   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.667156   73732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:10:44.667237   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.682233   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.692814   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.704688   73732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:10:44.721662   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.738629   73732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.754944   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
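	Taken together, the sed edits above leave the relevant keys of /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below; this is a sketch reconstructed from the logged commands, not a capture from the VM:
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]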
	I1105 19:10:44.765089   73732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:10:44.774147   73732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:10:44.774210   73732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:10:44.786312   73732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:10:44.795892   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:44.926823   73732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:10:45.022945   73732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:10:45.023042   73732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:10:45.027389   73732 start.go:563] Will wait 60s for crictl version
	I1105 19:10:45.027451   73732 ssh_runner.go:195] Run: which crictl
	I1105 19:10:45.030701   73732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:10:45.067294   73732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:10:45.067410   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.094394   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.123459   73732 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:10:45.124645   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:45.127396   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.127794   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:45.127833   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.128104   73732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 19:10:45.131923   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:45.143951   73732 kubeadm.go:883] updating cluster {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:10:45.144078   73732 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:10:45.144125   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:45.177770   73732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:10:45.177830   73732 ssh_runner.go:195] Run: which lz4
	I1105 19:10:45.181571   73732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:10:45.186569   73732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:10:45.186602   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:10:46.442865   73732 crio.go:462] duration metric: took 1.26132812s to copy over tarball
	I1105 19:10:46.442959   73732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:10:44.962206   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting to get IP...
	I1105 19:10:44.963032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963397   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963492   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:44.963380   75165 retry.go:31] will retry after 274.297859ms: waiting for machine to come up
	I1105 19:10:45.239024   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239453   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.239406   75165 retry.go:31] will retry after 239.892312ms: waiting for machine to come up
	I1105 19:10:45.481036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481584   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.481569   75165 retry.go:31] will retry after 360.538082ms: waiting for machine to come up
	I1105 19:10:45.844144   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844565   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844596   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.844533   75165 retry.go:31] will retry after 387.597088ms: waiting for machine to come up
	I1105 19:10:46.234241   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234798   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.234738   75165 retry.go:31] will retry after 597.596298ms: waiting for machine to come up
	I1105 19:10:46.833721   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834170   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.834142   75165 retry.go:31] will retry after 688.240413ms: waiting for machine to come up
	I1105 19:10:47.523898   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524412   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524442   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:47.524377   75165 retry.go:31] will retry after 826.38207ms: waiting for machine to come up
	I1105 19:10:48.352258   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352787   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352809   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:48.352681   75165 retry.go:31] will retry after 1.381579847s: waiting for machine to come up
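	The "waiting for machine to come up" retries above poll libvirt for a DHCP lease on the domain's MAC address, backing off a little longer each attempt. An equivalent manual check on the host (network name and MAC taken from the log lines; not part of the test run) would be:
	    virsh net-dhcp-leases mk-default-k8s-diff-port-608095 | grep 52:54:00:89:ba:6f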
	I1105 19:10:48.547186   73732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104175993s)
	I1105 19:10:48.547221   73732 crio.go:469] duration metric: took 2.104326973s to extract the tarball
	I1105 19:10:48.547231   73732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:10:48.583027   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:48.630180   73732 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:10:48.630208   73732 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:10:48.630218   73732 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.31.2 crio true true} ...
	I1105 19:10:48.630349   73732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-271881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:10:48.630412   73732 ssh_runner.go:195] Run: crio config
	I1105 19:10:48.682182   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:48.682204   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:48.682213   73732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:10:48.682232   73732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-271881 NodeName:embed-certs-271881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:10:48.682354   73732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-271881"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:10:48.682412   73732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:10:48.691968   73732 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:10:48.692031   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:10:48.700980   73732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:10:48.716797   73732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:10:48.732408   73732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1105 19:10:48.748354   73732 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1105 19:10:48.751791   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:48.763068   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:48.893747   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:10:48.910247   73732 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881 for IP: 192.168.39.58
	I1105 19:10:48.910270   73732 certs.go:194] generating shared ca certs ...
	I1105 19:10:48.910303   73732 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:10:48.910488   73732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:10:48.910547   73732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:10:48.910561   73732 certs.go:256] generating profile certs ...
	I1105 19:10:48.910673   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/client.key
	I1105 19:10:48.910768   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key.0a454894
	I1105 19:10:48.910837   73732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key
	I1105 19:10:48.911021   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:10:48.911059   73732 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:10:48.911071   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:10:48.911116   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:10:48.911160   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:10:48.911196   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:10:48.911265   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:48.912104   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:10:48.969066   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:10:49.000713   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:10:49.040367   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:10:49.068456   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1105 19:10:49.094166   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:10:49.115986   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:10:49.137770   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:10:49.161140   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:10:49.182996   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:10:49.206578   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:10:49.230006   73732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:10:49.245835   73732 ssh_runner.go:195] Run: openssl version
	I1105 19:10:49.251252   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:10:49.261237   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265318   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265398   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.270753   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:10:49.280568   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:10:49.290580   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294567   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294644   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.299812   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:10:49.309398   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:10:49.319451   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323490   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323543   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.328708   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
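	The openssl -hash / ln -fs pairs above build OpenSSL's hashed CA directory: each certificate in /etc/ssl/certs gets a symlink named after its subject hash with a .0 suffix, which is how TLS clients on the node locate it. A standalone sketch of the same step for the minikube CA (for this certificate the hash is b5213941, as the logged command shows):
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"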
	I1105 19:10:49.338805   73732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:10:49.342918   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:10:49.348526   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:10:49.353943   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:10:49.359527   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:10:49.364886   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:10:49.370119   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
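	Each -checkend 86400 run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); an exit status of 0 means it stays valid at least that long, so none of the profile certificates need regeneration here. A standalone version of the same probe:
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid for >= 24h" \
	      || echo "expires within 24h"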
	I1105 19:10:49.375437   73732 kubeadm.go:392] StartCluster: {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:10:49.375531   73732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:10:49.375572   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.415844   73732 cri.go:89] found id: ""
	I1105 19:10:49.415916   73732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:10:49.425336   73732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:10:49.425402   73732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:10:49.425474   73732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:10:49.434717   73732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:10:49.435831   73732 kubeconfig.go:125] found "embed-certs-271881" server: "https://192.168.39.58:8443"
	I1105 19:10:49.437903   73732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:10:49.446625   73732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I1105 19:10:49.446657   73732 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:10:49.446668   73732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:10:49.446732   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.479546   73732 cri.go:89] found id: ""
	I1105 19:10:49.479639   73732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:10:49.499034   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:10:49.510134   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:10:49.510159   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:10:49.510203   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:10:49.520482   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:10:49.520544   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:10:49.530750   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:10:49.539113   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:10:49.539183   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:10:49.548104   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.556754   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:10:49.556811   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.565606   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:10:49.574023   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:10:49.574091   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:10:49.582888   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:10:49.591876   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:49.688517   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.070191   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.38163928s)
	I1105 19:10:51.070240   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.267774   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.329051   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.406120   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:10:51.406226   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:51.907080   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:52.406468   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:49.735558   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735923   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735987   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:49.735914   75165 retry.go:31] will retry after 1.132319443s: waiting for machine to come up
	I1105 19:10:50.870267   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870770   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870801   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:50.870715   75165 retry.go:31] will retry after 1.791598796s: waiting for machine to come up
	I1105 19:10:52.664538   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665055   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:52.664912   75165 retry.go:31] will retry after 1.910294965s: waiting for machine to come up
	I1105 19:10:52.907103   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.407319   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.421763   73732 api_server.go:72] duration metric: took 2.015640262s to wait for apiserver process to appear ...
	I1105 19:10:53.421794   73732 api_server.go:88] waiting for apiserver healthz status ...
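	The healthz polling that follows first returns 403 (anonymous requests are rejected until the API server finishes creating its RBAC bootstrap roles) and then 500 with a per-check breakdown until the remaining post-start hooks pass. A manual probe that reproduces the same responses, shown here only as an illustration (-k skips TLS verification; ?verbose requests the per-check listing):
	    curl -k "https://192.168.39.58:8443/healthz?verbose"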
	I1105 19:10:53.421816   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.752768   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.752803   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.752819   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.772365   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.772412   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.922705   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.928293   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:55.928329   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.422875   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.430633   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.430667   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.922156   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.934958   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.935016   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:57.422646   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:57.428784   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:10:57.435298   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:10:57.435319   73732 api_server.go:131] duration metric: took 4.013519207s to wait for apiserver health ...
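
	The ~500 ms retry cadence above is the health wait loop polling /healthz until the 500 responses flip to 200. Below is a minimal Go sketch of that pattern, an illustration only and not minikube's actual api_server.go; the URL and the multi-minute budget are taken from the log, and skipping TLS verification is an assumption made because no apiserver CA is loaded in the sketch.

```go
// Illustrative sketch: poll an apiserver /healthz endpoint until it returns
// 200 or a deadline expires. Not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption for the sketch: no CA bundle is configured, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.58:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
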
	I1105 19:10:57.435327   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:57.435333   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:57.437061   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:10:57.438374   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:10:57.448509   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
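
	The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not printed in the log. As a sketch only, a generic bridge plus host-local conflist of the usual CNI shape is written out below from Go; the subnet matches the 10.244.0.0/16 pod CIDR used later in this run, but the exact contents minikube ships are an assumption.

```go
// Sketch: write a generic bridge CNI config. Contents are assumed, not the
// actual file minikube installs.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Requires root; the target directory is the one created in the log above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```
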
	I1105 19:10:57.465994   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:10:57.474649   73732 system_pods.go:59] 8 kube-system pods found
	I1105 19:10:57.474682   73732 system_pods.go:61] "coredns-7c65d6cfc9-nwzpq" [be8aa054-3f68-4c19-bae3-9d9cfcb51869] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:10:57.474691   73732 system_pods.go:61] "etcd-embed-certs-271881" [c37c829b-1dca-4659-b24c-4559304d9fe0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:10:57.474703   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [6df78e2a-1360-4c4b-b451-c96aa60f24ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:10:57.474710   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [95a6baca-c246-4043-acbc-235b076a89b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:10:57.474723   73732 system_pods.go:61] "kube-proxy-f945s" [2cb835f0-3727-4dd1-bd21-a21554ffdc0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 19:10:57.474730   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [53e044c5-199c-46f4-b3db-d3b65a8203aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:10:57.474741   73732 system_pods.go:61] "metrics-server-6867b74b74-vw2sm" [403d0c5f-d870-4f89-8caa-f5e9c8bf9ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:10:57.474748   73732 system_pods.go:61] "storage-provisioner" [13a89bf9-fb97-413a-9948-1c69780784cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 19:10:57.474758   73732 system_pods.go:74] duration metric: took 8.737357ms to wait for pod list to return data ...
	I1105 19:10:57.474769   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:10:57.480599   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:10:57.480623   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:10:57.480634   73732 node_conditions.go:105] duration metric: took 5.857622ms to run NodePressure ...
	I1105 19:10:57.480651   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:54.577390   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577939   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577969   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:54.577885   75165 retry.go:31] will retry after 3.393120773s: waiting for machine to come up
	I1105 19:10:57.971960   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972441   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:57.972370   75165 retry.go:31] will retry after 4.425954537s: waiting for machine to come up
	I1105 19:10:57.896717   73732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902115   73732 kubeadm.go:739] kubelet initialised
	I1105 19:10:57.902138   73732 kubeadm.go:740] duration metric: took 5.39576ms waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902152   73732 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:10:57.907293   73732 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:10:59.913946   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:02.414802   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:03.663928   74485 start.go:364] duration metric: took 3m10.909065205s to acquireMachinesLock for "old-k8s-version-567666"
	I1105 19:11:03.664023   74485 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:03.664038   74485 fix.go:54] fixHost starting: 
	I1105 19:11:03.664514   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:03.664569   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:03.682846   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I1105 19:11:03.683341   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:03.683786   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:11:03.683812   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:03.684219   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:03.684407   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:03.684552   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetState
	I1105 19:11:03.686262   74485 fix.go:112] recreateIfNeeded on old-k8s-version-567666: state=Stopped err=<nil>
	I1105 19:11:03.686295   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	W1105 19:11:03.686440   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:03.688047   74485 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-567666" ...
	I1105 19:11:02.401454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.401980   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Found IP for machine: 192.168.50.10
	I1105 19:11:02.402015   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has current primary IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.402025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserving static IP address...
	I1105 19:11:02.402384   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.402413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserved static IP address: 192.168.50.10
	I1105 19:11:02.402432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | skip adding static IP to network mk-default-k8s-diff-port-608095 - found existing host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"}
	I1105 19:11:02.402445   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for SSH to be available...
	I1105 19:11:02.402461   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Getting to WaitForSSH function...
	I1105 19:11:02.404454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404751   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.404778   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404915   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH client type: external
	I1105 19:11:02.404964   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa (-rw-------)
	I1105 19:11:02.405032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:02.405059   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | About to run SSH command:
	I1105 19:11:02.405072   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | exit 0
	I1105 19:11:02.526769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:02.527147   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetConfigRaw
	I1105 19:11:02.527756   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.530014   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530325   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.530357   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530527   74141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/config.json ...
	I1105 19:11:02.530708   74141 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:02.530728   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:02.530921   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.532868   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533184   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.533215   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533334   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.533493   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533630   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533761   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.533930   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.534116   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.534128   74141 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:02.631085   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:02.631114   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631351   74141 buildroot.go:166] provisioning hostname "default-k8s-diff-port-608095"
	I1105 19:11:02.631376   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631540   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.634037   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634371   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.634400   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634517   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.634691   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634849   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634995   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.635136   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.635310   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.635326   74141 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-608095 && echo "default-k8s-diff-port-608095" | sudo tee /etc/hostname
	I1105 19:11:02.744298   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-608095
	
	I1105 19:11:02.744327   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.747036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747348   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.747379   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747555   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.747716   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747846   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747940   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.748061   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.748266   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.748284   74141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-608095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-608095/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-608095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:02.850828   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:02.850854   74141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:02.850906   74141 buildroot.go:174] setting up certificates
	I1105 19:11:02.850923   74141 provision.go:84] configureAuth start
	I1105 19:11:02.850935   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.851260   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.853803   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854062   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.854088   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854203   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.856341   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856629   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.856659   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856747   74141 provision.go:143] copyHostCerts
	I1105 19:11:02.856804   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:02.856823   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:02.856874   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:02.856987   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:02.856997   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:02.857017   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:02.857075   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:02.857082   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:02.857100   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:02.857148   74141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-608095 san=[127.0.0.1 192.168.50.10 default-k8s-diff-port-608095 localhost minikube]
	I1105 19:11:03.048307   74141 provision.go:177] copyRemoteCerts
	I1105 19:11:03.048362   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:03.048386   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.050951   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051303   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.051353   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051556   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.051785   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.051953   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.052084   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.128441   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:03.150680   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1105 19:11:03.172480   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:03.194311   74141 provision.go:87] duration metric: took 343.374586ms to configureAuth
	I1105 19:11:03.194338   74141 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:03.194499   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:03.194560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.197209   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197585   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.197603   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197822   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.198006   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198168   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198336   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.198503   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.198686   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.198706   74141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:03.429895   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:03.429926   74141 machine.go:96] duration metric: took 899.201597ms to provisionDockerMachine
	I1105 19:11:03.429941   74141 start.go:293] postStartSetup for "default-k8s-diff-port-608095" (driver="kvm2")
	I1105 19:11:03.429955   74141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:03.429976   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.430329   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:03.430364   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.433455   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.433791   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.433820   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.434009   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.434323   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.434500   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.434659   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.514652   74141 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:03.518678   74141 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:03.518711   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:03.518774   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:03.518877   74141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:03.519014   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:03.528972   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:03.555892   74141 start.go:296] duration metric: took 125.936355ms for postStartSetup
	I1105 19:11:03.555939   74141 fix.go:56] duration metric: took 19.896481237s for fixHost
	I1105 19:11:03.555966   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.558764   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559153   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.559183   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559402   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.559610   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559788   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559933   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.560116   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.560292   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.560303   74141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:03.663723   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833863.637227261
	
	I1105 19:11:03.663751   74141 fix.go:216] guest clock: 1730833863.637227261
	I1105 19:11:03.663766   74141 fix.go:229] Guest: 2024-11-05 19:11:03.637227261 +0000 UTC Remote: 2024-11-05 19:11:03.555945261 +0000 UTC m=+239.048686257 (delta=81.282ms)
	I1105 19:11:03.663815   74141 fix.go:200] guest clock delta is within tolerance: 81.282ms
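
	The clock check above parses the guest's `date +%s.%N` output and compares it against the host's view of the same moment, accepting the 81.282ms delta. A small Go sketch of that comparison follows, using the sample value from the log and an assumed one-second tolerance; it is not minikube's fix.go.

```go
// Illustrative sketch: parse a "seconds.nanoseconds" clock sample and report
// whether the drift from the local clock is within a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730833863.637227261") // sample from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
```
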
	I1105 19:11:03.663822   74141 start.go:83] releasing machines lock for "default-k8s-diff-port-608095", held for 20.004399519s
	I1105 19:11:03.663858   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.664158   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:03.666922   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667372   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.667408   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668101   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668297   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668412   74141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:03.668478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.668748   74141 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:03.668774   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.671463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671781   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.671810   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671903   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672175   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672333   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.672369   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.672417   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672578   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.672598   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672779   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.673106   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.777585   74141 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:03.783343   74141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:03.927951   74141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:03.933308   74141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:03.933380   74141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:03.948472   74141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:03.948499   74141 start.go:495] detecting cgroup driver to use...
	I1105 19:11:03.948572   74141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:03.963929   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:03.978578   74141 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:03.978643   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:03.992096   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:04.006036   74141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:04.114061   74141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:04.274136   74141 docker.go:233] disabling docker service ...
	I1105 19:11:04.274220   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:04.287806   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:04.300294   74141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:04.429899   74141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:04.576075   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:04.590934   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:04.611299   74141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:04.611375   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.623876   74141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:04.623949   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.634333   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.644768   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.654549   74141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:04.665001   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.675464   74141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.693845   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.703982   74141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:04.713758   74141 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:04.713820   74141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:04.727618   74141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
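
	The lines just above show the fallback taken when the bridge-nf-call-iptables sysctl is absent: load br_netfilter, then enable IPv4 forwarding. A hedged Go equivalent of that sequence is sketched below, assuming root privileges and the standard procfs paths; it is not the code in crio.go.

```go
// Sketch: if the bridge netfilter sysctl is missing, load br_netfilter,
// then turn on IPv4 forwarding.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Sysctl not present: the br_netfilter module is likely not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
			return
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
	}
}
```
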
	I1105 19:11:04.737679   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:04.866928   74141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:04.966529   74141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:04.966599   74141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:04.971536   74141 start.go:563] Will wait 60s for crictl version
	I1105 19:11:04.971602   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:11:04.975344   74141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:05.015910   74141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:05.015987   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.043577   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.072767   74141 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:03.689374   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .Start
	I1105 19:11:03.689560   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring networks are active...
	I1105 19:11:03.690290   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network default is active
	I1105 19:11:03.690659   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network mk-old-k8s-version-567666 is active
	I1105 19:11:03.691130   74485 main.go:141] libmachine: (old-k8s-version-567666) Getting domain xml...
	I1105 19:11:03.691890   74485 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:11:05.006949   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting to get IP...
	I1105 19:11:05.008062   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.008547   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.008605   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.008523   75309 retry.go:31] will retry after 290.124771ms: waiting for machine to come up
	I1105 19:11:05.300185   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.300768   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.300803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.300717   75309 retry.go:31] will retry after 292.829683ms: waiting for machine to come up
	I1105 19:11:05.595365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.595881   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.595907   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.595831   75309 retry.go:31] will retry after 447.168257ms: waiting for machine to come up
	I1105 19:11:06.045320   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.045946   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.045976   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.045893   75309 retry.go:31] will retry after 420.272812ms: waiting for machine to come up
	I1105 19:11:06.467556   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.468012   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.468039   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.467962   75309 retry.go:31] will retry after 657.733497ms: waiting for machine to come up
	I1105 19:11:07.128022   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:07.128531   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:07.128559   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:07.128484   75309 retry.go:31] will retry after 922.664226ms: waiting for machine to come up
	I1105 19:11:04.416533   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:06.915445   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:07.417579   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:07.417610   73732 pod_ready.go:82] duration metric: took 9.510292246s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:07.417620   73732 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
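
	The pod_ready waits above poll each system-critical pod until its Ready condition turns True, within a 4m0s budget per pod. A self-contained client-go sketch of that style of wait is shown here as an illustration, not minikube's pod_ready.go; the kubeconfig path is a placeholder.

```go
// Illustrative sketch: wait for a pod's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-nwzpq", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```
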
	I1105 19:11:05.073913   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:05.077086   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077430   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:05.077468   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077691   74141 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:05.081724   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:05.093668   74141 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:05.093785   74141 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:05.093853   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:05.128693   74141 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:05.128753   74141 ssh_runner.go:195] Run: which lz4
	I1105 19:11:05.133116   74141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:05.137101   74141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:05.137126   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:11:06.379012   74141 crio.go:462] duration metric: took 1.245926141s to copy over tarball
	I1105 19:11:06.379088   74141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:08.545369   74141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.166238549s)
	I1105 19:11:08.545405   74141 crio.go:469] duration metric: took 2.166364449s to extract the tarball
	I1105 19:11:08.545422   74141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:08.581651   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:08.628768   74141 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:11:08.628795   74141 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:11:08.628805   74141 kubeadm.go:934] updating node { 192.168.50.10 8444 v1.31.2 crio true true} ...
	I1105 19:11:08.628937   74141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-608095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:08.629056   74141 ssh_runner.go:195] Run: crio config
	I1105 19:11:08.690112   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:08.690140   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:08.690152   74141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:08.690184   74141 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-608095 NodeName:default-k8s-diff-port-608095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:08.690346   74141 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-608095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:08.690415   74141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:08.700222   74141 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:08.700294   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:08.709542   74141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1105 19:11:08.725723   74141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:08.741985   74141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1105 19:11:08.758655   74141 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:08.762296   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:08.774119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:08.910000   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:08.926765   74141 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095 for IP: 192.168.50.10
	I1105 19:11:08.926788   74141 certs.go:194] generating shared ca certs ...
	I1105 19:11:08.926806   74141 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:08.927006   74141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:08.927069   74141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:08.927080   74141 certs.go:256] generating profile certs ...
	I1105 19:11:08.927157   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/client.key
	I1105 19:11:08.927229   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key.f2b96156
	I1105 19:11:08.927281   74141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key
	I1105 19:11:08.927456   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:08.927506   74141 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:08.927516   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:08.927549   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:08.927585   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:08.927620   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:08.927682   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:08.928417   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:08.971359   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:09.011632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:09.049748   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:09.078632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 19:11:09.105786   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:09.127855   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:09.151461   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:11:09.174068   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:09.196733   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:09.219111   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:09.241335   74141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:09.257040   74141 ssh_runner.go:195] Run: openssl version
	I1105 19:11:09.262371   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:09.272232   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276300   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276362   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.281747   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:09.291864   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:09.302012   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306085   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306142   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.311374   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:09.321334   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:09.331208   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335401   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335451   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.340595   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:09.350430   74141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:09.354622   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:09.360165   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:09.365624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:09.371545   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:09.377226   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:09.382624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 19:11:09.387929   74141 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:09.388032   74141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:09.388076   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.429707   74141 cri.go:89] found id: ""
	I1105 19:11:09.429783   74141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:09.440455   74141 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:09.440476   74141 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:09.440527   74141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:09.451745   74141 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:09.452609   74141 kubeconfig.go:125] found "default-k8s-diff-port-608095" server: "https://192.168.50.10:8444"
	I1105 19:11:09.454539   74141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:09.463900   74141 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.10
	I1105 19:11:09.463926   74141 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:09.463936   74141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:09.463987   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.497583   74141 cri.go:89] found id: ""
	I1105 19:11:09.497656   74141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:09.513767   74141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:09.523219   74141 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:09.523237   74141 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:09.523284   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1105 19:11:09.533116   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:09.533181   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:09.542453   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1105 19:11:08.053120   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:08.053610   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:08.053636   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:08.053587   75309 retry.go:31] will retry after 947.415519ms: waiting for machine to come up
	I1105 19:11:09.002803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:09.003423   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:09.003452   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:09.003363   75309 retry.go:31] will retry after 1.07978111s: waiting for machine to come up
	I1105 19:11:10.084404   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:10.084808   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:10.084830   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:10.084784   75309 retry.go:31] will retry after 1.482510322s: waiting for machine to come up
	I1105 19:11:11.568421   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:11.568840   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:11.568869   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:11.568791   75309 retry.go:31] will retry after 1.630983434s: waiting for machine to come up
	I1105 19:11:08.426308   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.426337   73732 pod_ready.go:82] duration metric: took 1.008708779s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.426350   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432238   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.432264   73732 pod_ready.go:82] duration metric: took 5.905051ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432276   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438187   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.438214   73732 pod_ready.go:82] duration metric: took 5.9294ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438226   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443794   73732 pod_ready.go:93] pod "kube-proxy-f945s" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.443823   73732 pod_ready.go:82] duration metric: took 5.587862ms for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443835   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:10.449498   73732 pod_ready.go:103] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:12.454934   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:12.454965   73732 pod_ready.go:82] duration metric: took 4.011121022s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:12.455003   73732 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:09.551174   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:09.551235   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:09.560481   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.571928   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:09.571997   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.583935   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1105 19:11:09.595336   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:09.595401   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:09.605061   74141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:09.613920   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:09.718759   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.680100   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.901034   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.951868   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.997866   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:10.997956   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.498113   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.998192   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.498517   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.998919   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:13.013078   74141 api_server.go:72] duration metric: took 2.01520799s to wait for apiserver process to appear ...
	I1105 19:11:13.013106   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:11:13.013136   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.042333   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.042388   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.042404   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.085574   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.085602   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.513733   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.518755   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:16.518789   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.013278   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.019214   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:17.019236   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.513886   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.519036   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:11:17.528970   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:11:17.529000   74141 api_server.go:131] duration metric: took 4.515887773s to wait for apiserver health ...
	I1105 19:11:17.529009   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:17.529016   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:17.530429   74141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:11:13.201891   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:13.202425   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:13.202453   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:13.202387   75309 retry.go:31] will retry after 2.689744765s: waiting for machine to come up
	I1105 19:11:15.893632   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:15.893989   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:15.894034   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:15.893964   75309 retry.go:31] will retry after 2.460566804s: waiting for machine to come up
	I1105 19:11:14.465748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:16.961287   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:17.531600   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:11:17.544876   74141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:11:17.567835   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:11:17.583925   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:11:17.583976   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:11:17.583988   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:11:17.583999   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:11:17.584015   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:11:17.584027   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:11:17.584041   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:11:17.584052   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:11:17.584060   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:11:17.584068   74141 system_pods.go:74] duration metric: took 16.206948ms to wait for pod list to return data ...
	I1105 19:11:17.584081   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:11:17.593935   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:11:17.593960   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:11:17.593971   74141 node_conditions.go:105] duration metric: took 9.883295ms to run NodePressure ...
	I1105 19:11:17.593988   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:17.929181   74141 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933853   74141 kubeadm.go:739] kubelet initialised
	I1105 19:11:17.933879   74141 kubeadm.go:740] duration metric: took 4.667992ms waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933888   74141 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:17.940560   74141 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.952799   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952832   74141 pod_ready.go:82] duration metric: took 12.240861ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.952845   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952856   74141 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.959079   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959105   74141 pod_ready.go:82] duration metric: took 6.23649ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.959119   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959130   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.963797   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963817   74141 pod_ready.go:82] duration metric: took 4.681011ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.963830   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963837   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.970915   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970935   74141 pod_ready.go:82] duration metric: took 7.091116ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.970945   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970951   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.371478   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371503   74141 pod_ready.go:82] duration metric: took 400.5454ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.371512   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371519   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.771731   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771768   74141 pod_ready.go:82] duration metric: took 400.239012ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.771783   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771792   74141 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:19.171239   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171271   74141 pod_ready.go:82] duration metric: took 399.46983ms for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:19.171286   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171296   74141 pod_ready.go:39] duration metric: took 1.237397637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:19.171315   74141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:11:19.185845   74141 ops.go:34] apiserver oom_adj: -16
	I1105 19:11:19.185869   74141 kubeadm.go:597] duration metric: took 9.745385943s to restartPrimaryControlPlane
	I1105 19:11:19.185880   74141 kubeadm.go:394] duration metric: took 9.797958845s to StartCluster
	I1105 19:11:19.185901   74141 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.185989   74141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:19.187722   74141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.187971   74141 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:11:19.188036   74141 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:11:19.188142   74141 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188160   74141 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-608095"
	I1105 19:11:19.188159   74141 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-608095"
	W1105 19:11:19.188171   74141 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:11:19.188199   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188236   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:19.188248   74141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-608095"
	I1105 19:11:19.188273   74141 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188310   74141 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.188323   74141 addons.go:243] addon metrics-server should already be in state true
	I1105 19:11:19.188379   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188526   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188569   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188674   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188725   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188802   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188823   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.189792   74141 out.go:177] * Verifying Kubernetes components...
	I1105 19:11:19.191119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:19.203875   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I1105 19:11:19.204313   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.204803   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.204830   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.205083   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I1105 19:11:19.205175   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.205432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.205488   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.205973   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.205999   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.206357   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.206916   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.206955   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.207292   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I1105 19:11:19.207671   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.208122   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.208146   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.208484   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.208861   74141 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.208882   74141 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:11:19.208909   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.209004   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209045   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.209234   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209273   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.223963   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I1105 19:11:19.224405   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.225044   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.225074   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.225460   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.226141   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I1105 19:11:19.226463   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.226509   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.226577   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.226757   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I1105 19:11:19.227058   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.227081   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.227475   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.227558   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.227797   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.228116   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.228136   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.228530   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.228755   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.229870   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.230471   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.232239   74141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:19.232263   74141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:11:19.233508   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:11:19.233527   74141 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:11:19.233548   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.233607   74141 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.233626   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:11:19.233647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.237337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237365   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237895   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237928   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237958   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237972   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.238155   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238270   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238440   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238623   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238681   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.239040   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.243685   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1105 19:11:19.244073   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.244584   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.244602   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.244951   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.245112   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.246617   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.246814   74141 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.246830   74141 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:11:19.246845   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.249467   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.249896   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.249925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.250139   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.250317   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.250466   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.250636   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
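For reference only: a hypothetical manual equivalent of the ssh client constructed above, not something this test run executes. The key path, user and IP come from the sshutil line above; the ssh options mirror the external-SSH invocation libmachine logs later in this run.

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa \
	    docker@192.168.50.10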
	I1105 19:11:19.396917   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:19.412224   74141 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:19.541493   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.566934   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:11:19.566982   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:11:19.567627   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.607685   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:11:19.607717   74141 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:11:19.640921   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:19.640959   74141 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:11:19.674550   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:20.091222   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091248   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091528   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091583   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091596   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091605   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091807   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091868   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091853   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.105073   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.105093   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.105426   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.105442   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719139   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.151476995s)
	I1105 19:11:20.719187   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719194   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.044605505s)
	I1105 19:11:20.719236   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719256   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719511   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719582   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719593   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719596   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719631   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719580   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719643   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719654   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719670   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719680   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719897   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719946   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719948   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719903   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719982   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719990   74141 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-608095"
	I1105 19:11:20.719927   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.721843   74141 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
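A hypothetical spot-check of the three addons reported as enabled above; it is not part of this test run. The kubectl context is the profile name from the log, while the deployment and pod names are assumptions about the manifests minikube applies for metrics-server and storage-provisioner.

	kubectl --context default-k8s-diff-port-608095 -n kube-system get deploy metrics-server
	kubectl --context default-k8s-diff-port-608095 -n kube-system get pod storage-provisioner
	kubectl --context default-k8s-diff-port-608095 get storageclass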
	I1105 19:11:22.583507   73496 start.go:364] duration metric: took 54.335724939s to acquireMachinesLock for "no-preload-459223"
	I1105 19:11:22.583581   73496 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:22.583590   73496 fix.go:54] fixHost starting: 
	I1105 19:11:22.584018   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:22.584054   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:22.603921   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1105 19:11:22.604367   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:22.604825   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:11:22.604845   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:22.605233   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:22.605408   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:22.605534   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:11:22.607289   73496 fix.go:112] recreateIfNeeded on no-preload-459223: state=Stopped err=<nil>
	I1105 19:11:22.607314   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	W1105 19:11:22.607458   73496 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:22.609455   73496 out.go:177] * Restarting existing kvm2 VM for "no-preload-459223" ...
	I1105 19:11:18.357643   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:18.358065   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:18.358099   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:18.358009   75309 retry.go:31] will retry after 3.036834524s: waiting for machine to come up
	I1105 19:11:21.398221   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398763   74485 main.go:141] libmachine: (old-k8s-version-567666) Found IP for machine: 192.168.61.125
	I1105 19:11:21.398825   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has current primary IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398843   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserving static IP address...
	I1105 19:11:21.399327   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.399350   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserved static IP address: 192.168.61.125
	I1105 19:11:21.399365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | skip adding static IP to network mk-old-k8s-version-567666 - found existing host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"}
	I1105 19:11:21.399379   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:11:21.399394   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting for SSH to be available...
	I1105 19:11:21.401270   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401664   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.401691   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401866   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:11:21.401897   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:11:21.401935   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:21.401949   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:11:21.401959   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:11:21.527815   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:21.528165   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:11:21.528874   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.531373   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531647   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.531672   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531876   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:11:21.532071   74485 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:21.532092   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:21.532332   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.534177   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534431   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.534465   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534556   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.534716   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534845   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534960   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.535142   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.535329   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.535341   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:21.643321   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:21.643354   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643618   74485 buildroot.go:166] provisioning hostname "old-k8s-version-567666"
	I1105 19:11:21.643646   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643812   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.646230   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646628   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.646666   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.647037   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647167   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647290   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.647421   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.647579   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.647592   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-567666 && echo "old-k8s-version-567666" | sudo tee /etc/hostname
	I1105 19:11:21.770209   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-567666
	
	I1105 19:11:21.770255   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.772932   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773314   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.773346   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773484   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.773691   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773950   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.774121   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.774357   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.774386   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-567666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-567666/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-567666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:21.890834   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:21.890860   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:21.890915   74485 buildroot.go:174] setting up certificates
	I1105 19:11:21.890929   74485 provision.go:84] configureAuth start
	I1105 19:11:21.890944   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.891224   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.893835   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894256   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.894285   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.896436   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896699   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.896715   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896893   74485 provision.go:143] copyHostCerts
	I1105 19:11:21.896951   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:21.896967   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:21.897037   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:21.897163   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:21.897176   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:21.897205   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:21.897279   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:21.897289   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:21.897315   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:21.897396   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-567666 san=[127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666]
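A hypothetical way to confirm the SANs listed above made it into the generated server certificate; the test itself only generates and copies the file. The path is the one from the provision.go line above.

	openssl x509 -in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# expect roughly: IP:127.0.0.1, IP:192.168.61.125, DNS:localhost, DNS:minikube, DNS:old-k8s-version-567666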
	I1105 19:11:21.962153   74485 provision.go:177] copyRemoteCerts
	I1105 19:11:21.962219   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:21.962257   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.964765   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965125   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.965166   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965330   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.965478   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.965603   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.965746   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.048519   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:22.072975   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 19:11:22.098263   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:22.120258   74485 provision.go:87] duration metric: took 229.316972ms to configureAuth
	I1105 19:11:22.120285   74485 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:22.120444   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:11:22.120516   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.123859   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124309   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.124344   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124536   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.124737   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.124922   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.125055   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.125213   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.125375   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.125388   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:22.349922   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:22.349964   74485 machine.go:96] duration metric: took 817.87332ms to provisionDockerMachine
	I1105 19:11:22.349979   74485 start.go:293] postStartSetup for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:11:22.349992   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:22.350014   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.350350   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:22.350385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.352922   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353310   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.353332   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353459   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.353638   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.353807   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.353921   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.437482   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:22.441617   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:22.441646   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:22.441711   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:22.441807   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:22.441929   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:22.451016   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:22.474199   74485 start.go:296] duration metric: took 124.207336ms for postStartSetup
	I1105 19:11:22.474233   74485 fix.go:56] duration metric: took 18.810197154s for fixHost
	I1105 19:11:22.474269   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.476786   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477119   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.477157   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477279   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.477471   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477621   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477753   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.477910   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.478070   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.478081   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:22.583343   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833882.558222038
	
	I1105 19:11:22.583363   74485 fix.go:216] guest clock: 1730833882.558222038
	I1105 19:11:22.583372   74485 fix.go:229] Guest: 2024-11-05 19:11:22.558222038 +0000 UTC Remote: 2024-11-05 19:11:22.474236871 +0000 UTC m=+209.862783450 (delta=83.985167ms)
	I1105 19:11:22.583418   74485 fix.go:200] guest clock delta is within tolerance: 83.985167ms
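The delta above is simply the guest timestamp minus the host-side remote timestamp; a quick, hypothetical re-derivation (not run by the test):

	echo '1730833882.558222038 - 1730833882.474236871' | bc -l
	# .083985167 seconds ≈ 83.985167ms, matching the delta fix.go reports as within tolerance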
	I1105 19:11:22.583429   74485 start.go:83] releasing machines lock for "old-k8s-version-567666", held for 18.919444623s
	I1105 19:11:22.583460   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.583717   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:22.586183   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586479   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.586509   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586687   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587137   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587310   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587400   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:22.587448   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.587521   74485 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:22.587548   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.590145   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590474   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.590507   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590530   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590655   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.590831   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.590995   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.591010   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591037   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.591179   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.591286   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.591438   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.591558   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591702   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:19.461723   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:21.962582   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:22.702707   74485 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:22.708965   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:22.856764   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:22.863791   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:22.863866   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:22.883997   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:22.884022   74485 start.go:495] detecting cgroup driver to use...
	I1105 19:11:22.884094   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:22.901499   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:22.919358   74485 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:22.919422   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:22.936964   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:22.953538   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:23.077720   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:23.218316   74485 docker.go:233] disabling docker service ...
	I1105 19:11:23.218390   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:23.238316   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:23.251814   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:23.427386   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:23.552928   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:23.567149   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:23.587241   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 19:11:23.587307   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.597558   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:23.597620   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.607466   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.616794   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.626425   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
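After the sed edits above, the CRI-O drop-in should carry the pause image, cgroup manager and conmon cgroup that were just written; the grep below is a hypothetical check on the guest, not a step the test performs.

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"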
	I1105 19:11:23.637121   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:23.649243   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:23.649305   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:23.664648   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
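Because the sysctl probe above failed (no /proc/sys/net/bridge tree yet), minikube loads br_netfilter and enables IPv4 forwarding. A hypothetical follow-up check on the guest, not part of the test:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# net.ipv4.ip_forward should read 1 after the echo above; the bridge value depends on the kernel/module defaults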
	I1105 19:11:23.675060   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:23.812636   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:23.903326   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:23.903404   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:23.908377   74485 start.go:563] Will wait 60s for crictl version
	I1105 19:11:23.908434   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:23.912163   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:23.961712   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:23.961794   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:23.992951   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:24.032041   74485 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1105 19:11:20.723316   74141 addons.go:510] duration metric: took 1.53528546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1105 19:11:21.416385   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:23.416458   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:22.610737   73496 main.go:141] libmachine: (no-preload-459223) Calling .Start
	I1105 19:11:22.610910   73496 main.go:141] libmachine: (no-preload-459223) Ensuring networks are active...
	I1105 19:11:22.611680   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network default is active
	I1105 19:11:22.612057   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network mk-no-preload-459223 is active
	I1105 19:11:22.612426   73496 main.go:141] libmachine: (no-preload-459223) Getting domain xml...
	I1105 19:11:22.613081   73496 main.go:141] libmachine: (no-preload-459223) Creating domain...
	I1105 19:11:24.013821   73496 main.go:141] libmachine: (no-preload-459223) Waiting to get IP...
	I1105 19:11:24.014922   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.015467   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.015561   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.015439   75501 retry.go:31] will retry after 233.461829ms: waiting for machine to come up
	I1105 19:11:24.251339   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.252673   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.252799   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.252760   75501 retry.go:31] will retry after 276.401207ms: waiting for machine to come up
	I1105 19:11:24.531408   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.531964   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.531987   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.531909   75501 retry.go:31] will retry after 367.69826ms: waiting for machine to come up
	I1105 19:11:24.901179   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.901579   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.901608   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.901536   75501 retry.go:31] will retry after 602.654501ms: waiting for machine to come up
	I1105 19:11:25.505889   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:25.506403   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:25.506426   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:25.506364   75501 retry.go:31] will retry after 492.077165ms: waiting for machine to come up
	I1105 19:11:24.033400   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:24.036549   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037128   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:24.037165   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037346   74485 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:24.042641   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
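The bash one-liner above rewrites /etc/hosts by dropping any existing host.minikube.internal entry and appending the new mapping. A rough pure-Go equivalent of that idiom (hypothetical function name; the real path runs the shell snippet over SSH):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the { grep -v ...; echo ...; } idiom above.
func upsertHostsEntry(hosts, ip, host string) string {
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	kept := make([]string, 0, len(lines)+1)
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	existing := "127.0.0.1\tlocalhost\n192.168.61.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(existing, "192.168.61.1", "host.minikube.internal"))
}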
	I1105 19:11:24.055174   74485 kubeadm.go:883] updating cluster {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:24.055327   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:11:24.055388   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:24.101655   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:24.101724   74485 ssh_runner.go:195] Run: which lz4
	I1105 19:11:24.105618   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:24.109705   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:24.109735   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 19:11:25.602158   74485 crio.go:462] duration metric: took 1.496564307s to copy over tarball
	I1105 19:11:25.602236   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:23.963218   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:26.461963   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:25.419351   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:26.916693   74141 node_ready.go:49] node "default-k8s-diff-port-608095" has status "Ready":"True"
	I1105 19:11:26.916731   74141 node_ready.go:38] duration metric: took 7.50447744s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:26.916744   74141 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:26.922179   74141 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927845   74141 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.927879   74141 pod_ready.go:82] duration metric: took 5.666725ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927892   74141 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932723   74141 pod_ready.go:93] pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.932752   74141 pod_ready.go:82] duration metric: took 4.843531ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932761   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937108   74141 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.937137   74141 pod_ready.go:82] duration metric: took 4.368536ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937152   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.941970   74141 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.941995   74141 pod_ready.go:82] duration metric: took 4.833418ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.942008   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317480   74141 pod_ready.go:93] pod "kube-proxy-8v42c" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.317505   74141 pod_ready.go:82] duration metric: took 375.489077ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317517   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717923   74141 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.717945   74141 pod_ready.go:82] duration metric: took 400.42059ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717956   74141 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
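The pod_ready.go lines above walk each system-critical pod and block until its Ready condition is True (or the per-pod timeout elapses). A condensed sketch of that check with client-go, assuming an already-configured *kubernetes.Clientset; the function name and polling interval are illustrative, not minikube's actual helpers:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// transient errors and not-yet-ready pods are both retried
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready within %v", ns, name, timeout)
}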
	I1105 19:11:26.000041   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.000558   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.000613   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.000525   75501 retry.go:31] will retry after 920.198126ms: waiting for machine to come up
	I1105 19:11:26.922134   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.922917   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.922951   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.922858   75501 retry.go:31] will retry after 1.071853506s: waiting for machine to come up
	I1105 19:11:27.996574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:27.996995   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:27.997020   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:27.996949   75501 retry.go:31] will retry after 1.283200825s: waiting for machine to come up
	I1105 19:11:29.282457   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:29.282942   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:29.282979   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:29.282903   75501 retry.go:31] will retry after 1.512809658s: waiting for machine to come up
	I1105 19:11:28.701223   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.098952901s)
	I1105 19:11:28.701253   74485 crio.go:469] duration metric: took 3.099065633s to extract the tarball
	I1105 19:11:28.701263   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:28.744214   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:28.778845   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:28.778868   74485 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:28.778962   74485 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:28.778945   74485 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.779024   74485 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.779039   74485 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.778939   74485 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.779067   74485 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.779083   74485 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.778957   74485 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781024   74485 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781003   74485 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.781052   74485 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.781002   74485 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.781088   74485 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.781114   74485 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.013637   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 19:11:29.043928   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.043936   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.044140   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.045892   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.046313   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.055792   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.081724   74485 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 19:11:29.081779   74485 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 19:11:29.081826   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.234925   74485 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 19:11:29.234966   74485 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.235046   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235079   74485 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 19:11:29.235112   74485 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.235136   74485 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 19:11:29.235152   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235167   74485 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.235200   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235238   74485 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 19:11:29.235277   74485 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.235298   74485 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 19:11:29.235320   74485 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.235333   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235352   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235351   74485 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 19:11:29.235385   74485 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.235415   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235426   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.251873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.251960   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.251985   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.252000   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.371298   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.415548   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.415592   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.415654   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.415710   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.415791   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.415868   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.466873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.544593   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.544660   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.586695   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.586714   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.586812   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.586916   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.606582   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 19:11:29.707767   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 19:11:29.707803   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 19:11:29.716195   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 19:11:29.723097   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 19:11:30.039971   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:30.182760   74485 cache_images.go:92] duration metric: took 1.403874987s to LoadCachedImages
	W1105 19:11:30.182890   74485 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
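The cache_images lines above decide, per image, whether the CRI-O store already has it (by inspecting it and comparing the ID) and otherwise mark it "needs transfer" from the local cache; here the cached pause_3.2 tarball is missing, so the whole step is skipped with the warning shown. A stripped-down sketch of the existence check over os/exec — illustrative only, the real code goes through minikube's ssh_runner on the guest:

package images

import (
	"os/exec"
	"strings"
)

// imagePresent reports whether the container runtime already has the image,
// using `podman image inspect` the same way the log above does. A non-zero
// exit status (image not found) simply means "needs transfer".
func imagePresent(image string) (bool, string) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false, ""
	}
	return true, strings.TrimSpace(string(out))
}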
	I1105 19:11:30.182912   74485 kubeadm.go:934] updating node { 192.168.61.125 8443 v1.20.0 crio true true} ...
	I1105 19:11:30.183052   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-567666 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:30.183146   74485 ssh_runner.go:195] Run: crio config
	I1105 19:11:30.235206   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:11:30.235241   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:30.235253   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:30.235277   74485 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-567666 NodeName:old-k8s-version-567666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 19:11:30.235433   74485 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-567666"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:30.235503   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 19:11:30.245189   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:30.245263   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:30.254772   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1105 19:11:30.271711   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:30.288568   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1105 19:11:30.309098   74485 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:30.313211   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:30.325637   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:30.447346   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:30.466863   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666 for IP: 192.168.61.125
	I1105 19:11:30.466884   74485 certs.go:194] generating shared ca certs ...
	I1105 19:11:30.466898   74485 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:30.467086   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:30.467152   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:30.467165   74485 certs.go:256] generating profile certs ...
	I1105 19:11:30.467322   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key
	I1105 19:11:30.467398   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8
	I1105 19:11:30.467448   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key
	I1105 19:11:30.467614   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:30.467656   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:30.467676   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:30.467722   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:30.467759   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:30.467788   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:30.467847   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:30.468756   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:30.532325   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:30.559936   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:30.592995   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:30.632421   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 19:11:30.662285   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:11:30.696292   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:30.725642   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:30.750231   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:30.773213   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:30.796269   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:30.820261   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:30.837059   74485 ssh_runner.go:195] Run: openssl version
	I1105 19:11:30.842937   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:30.855033   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859637   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859720   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.865747   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:30.877678   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:30.890762   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895576   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895642   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.901686   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:30.912689   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:30.923800   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928911   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928984   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.934782   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:30.947059   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:30.951934   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:30.958065   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:30.965341   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:30.971725   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:30.977606   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:30.983486   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
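Each of the openssl invocations above passes -checkend 86400, i.e. it asks whether the certificate will still be valid 24 hours from now; a non-zero exit status would trigger regeneration of that cert. A small sketch of the same check (path and helper name are illustrative):

package certs

import "os/exec"

// validForADay runs `openssl x509 -noout -checkend 86400` on certPath.
// openssl exits 0 if the certificate does not expire within the next
// 86400 seconds, and non-zero if it does (or is already expired).
func validForADay(certPath string) bool {
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400")
	return cmd.Run() == nil
}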
	I1105 19:11:30.989212   74485 kubeadm.go:392] StartCluster: {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:30.989350   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:30.989411   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.031794   74485 cri.go:89] found id: ""
	I1105 19:11:31.031884   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:31.043178   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:31.043202   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:31.043291   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:31.054102   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:31.055256   74485 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:31.055924   74485 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-567666" cluster setting kubeconfig missing "old-k8s-version-567666" context setting]
	I1105 19:11:31.056913   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:31.064220   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:31.074582   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.125
	I1105 19:11:31.074618   74485 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:31.074628   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:31.074706   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.111157   74485 cri.go:89] found id: ""
	I1105 19:11:31.111241   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:31.130027   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:31.139917   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:31.139939   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:31.140007   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:31.150790   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:31.150868   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:31.161397   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:31.170394   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:31.170462   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:31.179594   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.188892   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:31.188952   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.199840   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:31.209166   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:31.209244   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:31.219687   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:31.231079   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:31.350667   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.094565   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.334807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.457538   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.534503   74485 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:32.534596   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:28.464017   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.962422   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:29.725325   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:32.225372   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.796963   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:30.797438   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:30.797489   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:30.797407   75501 retry.go:31] will retry after 1.774832047s: waiting for machine to come up
	I1105 19:11:32.574423   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:32.575000   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:32.575047   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:32.574929   75501 retry.go:31] will retry after 2.041093372s: waiting for machine to come up
	I1105 19:11:34.618469   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:34.618954   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:34.619015   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:34.618915   75501 retry.go:31] will retry after 2.731949113s: waiting for machine to come up
	I1105 19:11:33.034690   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:33.535594   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.035526   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.534836   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.034947   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.535108   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.035417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.535438   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.034766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.535415   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:32.962469   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.963093   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.461010   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.724484   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.224511   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.352209   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:37.352752   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:37.352783   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:37.352686   75501 retry.go:31] will retry after 3.62202055s: waiting for machine to come up
	I1105 19:11:38.035553   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:38.534702   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.035332   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.534749   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.034989   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.535354   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.035624   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.534847   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.035293   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.535363   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
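The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a plain poll: roughly every 500ms the restart path asks whether a kube-apiserver process matching that pattern exists yet. A minimal local sketch of that loop — the real implementation runs the command on the guest over SSH via ssh_runner:

package apiserver

import (
	"errors"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver process
// shows up or the timeout elapses. pgrep exits 0 when at least one process matches.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for the kube-apiserver process to appear")
}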
	I1105 19:11:39.465635   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:41.961348   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:40.978791   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979231   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has current primary IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979249   73496 main.go:141] libmachine: (no-preload-459223) Found IP for machine: 192.168.72.101
	I1105 19:11:40.979258   73496 main.go:141] libmachine: (no-preload-459223) Reserving static IP address...
	I1105 19:11:40.979621   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.979650   73496 main.go:141] libmachine: (no-preload-459223) Reserved static IP address: 192.168.72.101
	I1105 19:11:40.979669   73496 main.go:141] libmachine: (no-preload-459223) DBG | skip adding static IP to network mk-no-preload-459223 - found existing host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"}
	I1105 19:11:40.979682   73496 main.go:141] libmachine: (no-preload-459223) Waiting for SSH to be available...
	I1105 19:11:40.979710   73496 main.go:141] libmachine: (no-preload-459223) DBG | Getting to WaitForSSH function...
	I1105 19:11:40.981725   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.982063   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982202   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH client type: external
	I1105 19:11:40.982227   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa (-rw-------)
	I1105 19:11:40.982258   73496 main.go:141] libmachine: (no-preload-459223) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:40.982286   73496 main.go:141] libmachine: (no-preload-459223) DBG | About to run SSH command:
	I1105 19:11:40.982310   73496 main.go:141] libmachine: (no-preload-459223) DBG | exit 0
	I1105 19:11:41.111259   73496 main.go:141] libmachine: (no-preload-459223) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:41.111639   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetConfigRaw
	I1105 19:11:41.112368   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.114811   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115215   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.115244   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115499   73496 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/config.json ...
	I1105 19:11:41.115687   73496 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:41.115705   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:41.115900   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.118059   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118481   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.118505   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118659   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.118833   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.118959   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.119078   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.119222   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.119426   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.119442   73496 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:41.235030   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:41.235060   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235270   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:11:41.235294   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235480   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.237980   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238288   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.238327   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238405   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.238567   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238687   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238805   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.238938   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.239150   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.239163   73496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-459223 && echo "no-preload-459223" | sudo tee /etc/hostname
	I1105 19:11:41.366664   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-459223
	
	I1105 19:11:41.366693   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.369672   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.369979   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.370006   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.370147   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.370335   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370661   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.370830   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.371067   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.371086   73496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-459223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-459223/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-459223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:41.495741   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:41.495774   73496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:41.495796   73496 buildroot.go:174] setting up certificates
	I1105 19:11:41.495804   73496 provision.go:84] configureAuth start
	I1105 19:11:41.495816   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.496076   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.498948   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499377   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.499409   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499552   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.501842   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502168   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.502198   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502367   73496 provision.go:143] copyHostCerts
	I1105 19:11:41.502428   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:41.502445   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:41.502516   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:41.502662   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:41.502674   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:41.502706   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:41.502814   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:41.502825   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:41.502853   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:41.502934   73496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.no-preload-459223 san=[127.0.0.1 192.168.72.101 localhost minikube no-preload-459223]
	I1105 19:11:41.648058   73496 provision.go:177] copyRemoteCerts
	I1105 19:11:41.648115   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:41.648137   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.650915   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651274   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.651306   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.651707   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.651878   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.652032   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:41.736549   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:11:41.759352   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:41.782205   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:41.804725   73496 provision.go:87] duration metric: took 308.906806ms to configureAuth
	I1105 19:11:41.804755   73496 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:41.804930   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:41.805011   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.807634   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.808071   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.808498   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808657   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808792   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.808960   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.809113   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.809125   73496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:42.033406   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:42.033449   73496 machine.go:96] duration metric: took 917.749182ms to provisionDockerMachine
	I1105 19:11:42.033462   73496 start.go:293] postStartSetup for "no-preload-459223" (driver="kvm2")
	I1105 19:11:42.033475   73496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:42.033506   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.033853   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:42.033883   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.037259   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037688   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.037722   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037869   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.038063   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.038231   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.038361   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.126624   73496 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:42.130761   73496 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:42.130794   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:42.130881   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:42.131006   73496 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:42.131120   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:42.140978   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:42.163880   73496 start.go:296] duration metric: took 130.405487ms for postStartSetup
	I1105 19:11:42.163933   73496 fix.go:56] duration metric: took 19.580327925s for fixHost
	I1105 19:11:42.163953   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.166648   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.166994   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.167025   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.167196   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.167394   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167565   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167705   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.167856   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:42.168016   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:42.168025   73496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:42.279303   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833902.251467447
	
	I1105 19:11:42.279336   73496 fix.go:216] guest clock: 1730833902.251467447
	I1105 19:11:42.279351   73496 fix.go:229] Guest: 2024-11-05 19:11:42.251467447 +0000 UTC Remote: 2024-11-05 19:11:42.163937292 +0000 UTC m=+356.505256250 (delta=87.530155ms)
	I1105 19:11:42.279378   73496 fix.go:200] guest clock delta is within tolerance: 87.530155ms
	I1105 19:11:42.279387   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 19.695831159s
	I1105 19:11:42.279417   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.279660   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:42.282462   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.282828   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.282871   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.283018   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283439   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283580   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283669   73496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:42.283716   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.283811   73496 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:42.283838   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.286528   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286754   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286891   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.286917   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287097   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.287112   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287124   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287313   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287495   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287510   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287666   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287664   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.287769   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.398511   73496 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:42.404337   73496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:42.550196   73496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:42.555775   73496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:42.555853   73496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:42.571003   73496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:42.571031   73496 start.go:495] detecting cgroup driver to use...
	I1105 19:11:42.571123   73496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:42.586390   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:42.599887   73496 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:42.599944   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:42.613260   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:42.626371   73496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:42.736949   73496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:42.898897   73496 docker.go:233] disabling docker service ...
	I1105 19:11:42.898965   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:42.912534   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:42.925075   73496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:43.043425   73496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:43.175468   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:43.190803   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:43.210413   73496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:43.210496   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.221971   73496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:43.222064   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.232251   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.241540   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.251131   73496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:43.261218   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.270932   73496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.287905   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.297730   73496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:43.307263   73496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:43.307319   73496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:43.319421   73496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:43.328415   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:43.445798   73496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:43.532190   73496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:43.532284   73496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:43.536931   73496 start.go:563] Will wait 60s for crictl version
	I1105 19:11:43.536986   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.540525   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:43.576428   73496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:43.576540   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.603034   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.631229   73496 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:39.724162   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:42.224141   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:44.224609   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:43.632482   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:43.634912   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635227   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:43.635260   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635530   73496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:43.639287   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:43.650818   73496 kubeadm.go:883] updating cluster {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:43.650963   73496 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:43.651042   73496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:43.685392   73496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:43.685421   73496 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:43.685492   73496 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.685500   73496 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.685517   73496 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.685547   73496 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.685506   73496 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.685569   73496 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.685558   73496 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.685623   73496 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.686958   73496 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.686979   73496 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.686976   73496 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.687017   73496 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.687030   73496 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.687057   73496 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1105 19:11:43.898928   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.914069   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1105 19:11:43.934388   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.940664   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.947392   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.951614   73496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1105 19:11:43.951652   73496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.951686   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.957000   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.045057   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.075256   73496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1105 19:11:44.075289   73496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1105 19:11:44.075304   73496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.075310   73496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075357   73496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1105 19:11:44.075388   73496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075417   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.075481   73496 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1105 19:11:44.075431   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075511   73496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.075543   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.102803   73496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1105 19:11:44.102856   73496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.102916   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.133582   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.133640   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.133655   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.133707   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.188042   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.188058   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.272464   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.272500   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.272467   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.272531   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.289003   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.289126   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.411162   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1105 19:11:44.411248   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.411307   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1105 19:11:44.411326   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:44.411361   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1105 19:11:44.411394   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:44.411432   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478064   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1105 19:11:44.478093   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478132   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1105 19:11:44.478152   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478178   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1105 19:11:44.478195   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1105 19:11:44.478211   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1105 19:11:44.478226   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:44.478249   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1105 19:11:44.478257   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:44.478324   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:44.889847   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.035199   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.534769   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.035551   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.535664   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.035103   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.535581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.035077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.535660   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.035462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.534898   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.962742   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.462884   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.724058   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:48.727054   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.976315   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.498135546s)
	I1105 19:11:46.976348   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1105 19:11:46.976361   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.498084867s)
	I1105 19:11:46.976386   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.498096252s)
	I1105 19:11:46.976392   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.498054417s)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1105 19:11:46.976395   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1105 19:11:46.976368   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976436   73496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.086553002s)
	I1105 19:11:46.976471   73496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1105 19:11:46.976488   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976506   73496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:46.976551   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:49.054369   73496 ssh_runner.go:235] Completed: which crictl: (2.077794607s)
	I1105 19:11:49.054455   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:49.054480   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.077976168s)
	I1105 19:11:49.054497   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1105 19:11:49.054520   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.054551   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.089648   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.509600   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455021031s)
	I1105 19:11:50.509639   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1105 19:11:50.509664   73496 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509679   73496 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.419997127s)
	I1105 19:11:50.509719   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509751   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.547301   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1105 19:11:50.547416   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:48.035320   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.535496   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.035636   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.535445   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.035499   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.535722   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.035700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.535310   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.035585   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.535468   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.962134   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.463479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.225155   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:53.723881   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:54.139987   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.592545704s)
	I1105 19:11:54.140021   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1105 19:11:54.140038   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.630297093s)
	I1105 19:11:54.140058   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1105 19:11:54.140089   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:54.140150   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:53.034919   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.535697   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.035353   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.534669   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.034957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.534747   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.035331   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.534699   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.465549   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.961291   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.725153   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:58.224417   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.887208   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.747032149s)
	I1105 19:11:55.887247   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1105 19:11:55.887278   73496 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:55.887331   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:57.753834   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.866475995s)
	I1105 19:11:57.753860   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1105 19:11:57.753879   73496 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:57.753917   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:58.605444   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1105 19:11:58.605490   73496 cache_images.go:123] Successfully loaded all cached images
	I1105 19:11:58.605498   73496 cache_images.go:92] duration metric: took 14.920064519s to LoadCachedImages
	I1105 19:11:58.605512   73496 kubeadm.go:934] updating node { 192.168.72.101 8443 v1.31.2 crio true true} ...
	I1105 19:11:58.605627   73496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-459223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:58.605719   73496 ssh_runner.go:195] Run: crio config
	I1105 19:11:58.654396   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:11:58.654422   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:58.654432   73496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:58.654456   73496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.101 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-459223 NodeName:no-preload-459223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:58.654636   73496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-459223"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.101"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.101"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
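	The four YAML documents above are the full kubeadm.yaml that gets written to /var/tmp/minikube/kubeadm.yaml.new. As a hedged illustration only (not minikube code), the KubeletConfiguration document can be sanity-checked by unmarshalling it into a small Go struct; the struct and its field set are assumptions made for this sketch, using the sigs.k8s.io/yaml package.

	package main

	import (
		"fmt"

		"sigs.k8s.io/yaml"
	)

	// kubeletCfg mirrors only the fields we want to check; it is not the
	// upstream KubeletConfiguration type, just a sketch for this report.
	type kubeletCfg struct {
		CgroupDriver             string `json:"cgroupDriver"`
		ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
		StaticPodPath            string `json:"staticPodPath"`
	}

	func main() {
		// Fields copied from the KubeletConfiguration document above.
		doc := []byte(`
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	staticPodPath: /etc/kubernetes/manifests
	`)
		var cfg kubeletCfg
		if err := yaml.Unmarshal(doc, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("driver=%s endpoint=%s staticPods=%s\n",
			cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.StaticPodPath)
	}

	Note: sigs.k8s.io/yaml converts YAML to JSON before decoding, which is why the struct uses json tags.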
	
	I1105 19:11:58.654714   73496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:58.666580   73496 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:58.666659   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:58.676390   73496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:11:58.692426   73496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:58.708650   73496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1105 19:11:58.727451   73496 ssh_runner.go:195] Run: grep 192.168.72.101	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:58.731200   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:58.743437   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:58.850614   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:58.867662   73496 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223 for IP: 192.168.72.101
	I1105 19:11:58.867694   73496 certs.go:194] generating shared ca certs ...
	I1105 19:11:58.867715   73496 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:58.867896   73496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:58.867954   73496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:58.867988   73496 certs.go:256] generating profile certs ...
	I1105 19:11:58.868073   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/client.key
	I1105 19:11:58.868129   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key.0f61fe1e
	I1105 19:11:58.868163   73496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key
	I1105 19:11:58.868276   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:58.868316   73496 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:58.868323   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:58.868347   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:58.868380   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:58.868409   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:58.868450   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:58.869179   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:58.911433   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:58.947863   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:58.977511   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:59.022637   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:11:59.060992   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:59.086516   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:59.109616   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:59.135019   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:59.159832   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:59.184470   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:59.207138   73496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:59.224379   73496 ssh_runner.go:195] Run: openssl version
	I1105 19:11:59.230142   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:59.243624   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248086   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248157   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.253684   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:59.264169   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:59.274837   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279102   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279159   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.284540   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:59.295198   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:59.306105   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310073   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310115   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.315240   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:59.325470   73496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:59.329485   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:59.334985   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:59.340316   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:59.345717   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:59.351082   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:59.356631   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
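	The six `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate is still valid for at least another 24 hours. A minimal Go sketch of the same check (illustrative only; the certificate path is taken from the log, everything else is an assumption):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Equivalent in spirit to `openssl x509 -checkend 86400`:
		// fail if the certificate expires within the next 24 hours.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}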
	I1105 19:11:59.361951   73496 kubeadm.go:392] StartCluster: {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:59.362047   73496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:59.362084   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.398746   73496 cri.go:89] found id: ""
	I1105 19:11:59.398819   73496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:59.408597   73496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:59.408614   73496 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:59.408656   73496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:59.418082   73496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:59.419128   73496 kubeconfig.go:125] found "no-preload-459223" server: "https://192.168.72.101:8443"
	I1105 19:11:59.421286   73496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:59.430458   73496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.101
	I1105 19:11:59.430490   73496 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:59.430500   73496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:59.430549   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.464047   73496 cri.go:89] found id: ""
	I1105 19:11:59.464102   73496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:59.480978   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:59.490808   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:59.490829   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:59.490871   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:59.499505   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:59.499559   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:59.508247   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:59.516942   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:59.517005   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:59.525910   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.534349   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:59.534392   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.544212   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:59.553794   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:59.553857   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:59.562739   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:59.571819   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:59.680938   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.564659   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:58.034948   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:58.534748   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.034961   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.535634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.035311   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.534756   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.035266   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.535256   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.035489   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.534701   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
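	The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs from process 74485 are a poll for the kube-apiserver process to appear on that node. A rough Go sketch of this style of wait loop (an assumption for illustration, run locally rather than through the ssh_runner seen in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Keep asking whether a kube-apiserver process whose command line
		// mentions "minikube" exists yet; pgrep exits non-zero when none does.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("kube-apiserver pid: %s", out)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("kube-apiserver process did not appear in time")
	}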
	I1105 19:11:57.963075   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.462112   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.224544   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:02.225623   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.226711   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.775338   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.844402   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.957534   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:12:00.957630   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.458375   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.958215   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.975834   73496 api_server.go:72] duration metric: took 1.018298528s to wait for apiserver process to appear ...
	I1105 19:12:01.975862   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:12:01.975884   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.774116   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.774149   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.774164   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.825378   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.825427   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.976663   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.984209   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:04.984244   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.476825   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.484608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.484644   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.975985   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.981608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.981639   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:06.476014   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:06.480296   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:12:06.487584   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:12:06.487613   73496 api_server.go:131] duration metric: took 4.511744097s to wait for apiserver health ...
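	The healthz sequence above is typical of an apiserver coming back up: anonymous requests are rejected with 403 until the RBAC bootstrap roles exist, /healthz then returns 500 while the remaining poststarthooks finish, and finally 200 with "ok". A hedged sketch of such a poll follows; it is not minikube's api_server.go, and a real check would trust the cluster CA instead of skipping TLS verification. The URL is taken from the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustrative only: skip certificate verification so the sketch
		// stays self-contained without the cluster CA bundle.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.72.101:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 (anonymous user) and 500 (poststarthooks still running)
				// both mean "not ready yet"; 200 with body "ok" means healthy.
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}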
	I1105 19:12:06.487623   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:12:06.487632   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:12:06.489302   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:12:03.034795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:03.534764   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.034833   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.534795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.034815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.534885   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.535327   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.035253   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.535011   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.961693   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.962003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:07.461125   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.724362   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:09.224191   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.490496   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:12:06.500809   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:12:06.529242   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:12:06.542769   73496 system_pods.go:59] 8 kube-system pods found
	I1105 19:12:06.542806   73496 system_pods.go:61] "coredns-7c65d6cfc9-9vvhj" [fde1a6e7-6807-440c-a38d-4f39ede6c11e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:12:06.542818   73496 system_pods.go:61] "etcd-no-preload-459223" [398e3fc3-6902-4cbb-bc50-a72bab461839] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:12:06.542828   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [33a306b0-a41d-4ca3-9d01-69faa7825fe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:12:06.542837   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [865ae24c-d991-4650-9e17-7242f84403e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:12:06.542844   73496 system_pods.go:61] "kube-proxy-6h584" [dd35774f-a245-42af-8fe9-bd6933ad0e30] Running
	I1105 19:12:06.542852   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [27d3685e-d548-49b6-a24d-02b1f8656c66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:12:06.542859   73496 system_pods.go:61] "metrics-server-6867b74b74-5sp2j" [7ddaa66e-b4ba-4241-8dba-5fc6ab66d777] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:12:06.542864   73496 system_pods.go:61] "storage-provisioner" [49786ba3-e9fc-45ad-9418-fd3a0a7b652c] Running
	I1105 19:12:06.542873   73496 system_pods.go:74] duration metric: took 13.603868ms to wait for pod list to return data ...
	I1105 19:12:06.542883   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:12:06.549398   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:12:06.549425   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:12:06.549435   73496 node_conditions.go:105] duration metric: took 6.546615ms to run NodePressure ...
	I1105 19:12:06.549452   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:06.812829   73496 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818052   73496 kubeadm.go:739] kubelet initialised
	I1105 19:12:06.818082   73496 kubeadm.go:740] duration metric: took 5.227942ms waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818093   73496 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:12:06.823883   73496 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.830129   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830164   73496 pod_ready.go:82] duration metric: took 6.253499ms for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.830176   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830187   73496 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.834901   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834942   73496 pod_ready.go:82] duration metric: took 4.743456ms for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.834954   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834988   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.841446   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841474   73496 pod_ready.go:82] duration metric: took 6.472942ms for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.841485   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841494   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.933972   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.933998   73496 pod_ready.go:82] duration metric: took 92.493084ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.934006   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.934012   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333443   73496 pod_ready.go:93] pod "kube-proxy-6h584" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:07.333473   73496 pod_ready.go:82] duration metric: took 399.45278ms for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333486   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:09.339907   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
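	The pod_ready.go lines above poll kube-system pods until their PodReady condition becomes True. A hedged client-go sketch of that kind of wait (the kubeconfig path and pod name are taken from the log; the code itself is an assumption for illustration, not minikube's pod_ready helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(
				context.TODO(), "metrics-server-6867b74b74-5sp2j", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}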
	I1105 19:12:08.035104   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:08.534784   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.035198   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.535319   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.035258   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.534634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.035604   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.535077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.035096   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.961614   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.962113   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.724418   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.724954   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.839467   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.839725   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.035100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:13.534793   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.035120   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.535318   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.035062   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.535127   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.034840   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.534830   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.035105   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.534928   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.961398   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.224300   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.729666   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.339542   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:17.840399   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:17.840424   73496 pod_ready.go:82] duration metric: took 10.506929493s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:17.840433   73496 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:19.846676   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.035126   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:18.535446   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.035154   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.535413   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.035580   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.534802   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.035030   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.535250   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.034785   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.534700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.460480   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.461609   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.223496   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.224908   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.847279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:24.347279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.034721   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.534672   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.035358   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.534813   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.535342   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.034934   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.534766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.035389   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.534831   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.961556   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.460682   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:25.723807   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:27.724515   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.346351   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:28.035226   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:28.535577   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.034984   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.535633   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.035509   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.534907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.535421   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.034719   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.534952   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:32.535067   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:32.575052   74485 cri.go:89] found id: ""
	I1105 19:12:32.575085   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.575096   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:32.575104   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:32.575164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:32.609969   74485 cri.go:89] found id: ""
	I1105 19:12:32.610003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.610011   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:32.610017   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:32.610065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:32.642343   74485 cri.go:89] found id: ""
	I1105 19:12:32.642369   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.642376   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:32.642381   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:32.642426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:28.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:30.960340   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.725101   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.224788   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:31.346559   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:33.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.680144   74485 cri.go:89] found id: ""
	I1105 19:12:32.680177   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.680188   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:32.680196   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:32.680270   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:32.715216   74485 cri.go:89] found id: ""
	I1105 19:12:32.715248   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.715259   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:32.715267   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:32.715321   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:32.751742   74485 cri.go:89] found id: ""
	I1105 19:12:32.751771   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.751795   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:32.751803   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:32.751865   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:32.786944   74485 cri.go:89] found id: ""
	I1105 19:12:32.787003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.787015   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:32.787023   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:32.787080   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:32.820523   74485 cri.go:89] found id: ""
	I1105 19:12:32.820550   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.820557   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:32.820565   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:32.820575   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:32.873960   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:32.874000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:32.889268   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:32.889296   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:33.011825   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:33.011846   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:33.011862   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:33.082785   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:33.082827   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:35.630678   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:35.644410   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:35.644492   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:35.679567   74485 cri.go:89] found id: ""
	I1105 19:12:35.679598   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.679607   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:35.679613   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:35.679666   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:35.713685   74485 cri.go:89] found id: ""
	I1105 19:12:35.713713   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.713721   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:35.713726   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:35.713789   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:35.749496   74485 cri.go:89] found id: ""
	I1105 19:12:35.749525   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.749536   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:35.749543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:35.749611   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:35.784228   74485 cri.go:89] found id: ""
	I1105 19:12:35.784254   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.784263   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:35.784269   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:35.784317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:35.818620   74485 cri.go:89] found id: ""
	I1105 19:12:35.818680   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.818696   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:35.818703   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:35.818769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:35.852525   74485 cri.go:89] found id: ""
	I1105 19:12:35.852554   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.852566   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:35.852574   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:35.852648   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:35.887906   74485 cri.go:89] found id: ""
	I1105 19:12:35.887931   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.887939   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:35.887944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:35.887994   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:35.920566   74485 cri.go:89] found id: ""
	I1105 19:12:35.920594   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.920602   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:35.920612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:35.920627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:35.972706   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:35.972742   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:35.986114   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:35.986141   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:36.067016   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:36.067044   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:36.067060   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:36.158947   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:36.159003   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:32.962679   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.461449   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:37.462001   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:34.724028   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:36.724174   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.728373   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.848563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.347478   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:40.347899   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.700738   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:38.713280   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:38.713351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:38.747293   74485 cri.go:89] found id: ""
	I1105 19:12:38.747335   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.747347   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:38.747355   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:38.747414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:38.781607   74485 cri.go:89] found id: ""
	I1105 19:12:38.781635   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.781643   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:38.781648   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:38.781703   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:38.815303   74485 cri.go:89] found id: ""
	I1105 19:12:38.815333   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.815342   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:38.815348   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:38.815397   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:38.850128   74485 cri.go:89] found id: ""
	I1105 19:12:38.850156   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.850166   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:38.850174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:38.850233   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:38.882470   74485 cri.go:89] found id: ""
	I1105 19:12:38.882493   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.882500   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:38.882506   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:38.882563   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:38.914669   74485 cri.go:89] found id: ""
	I1105 19:12:38.914698   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.914706   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:38.914713   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:38.914762   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:38.946521   74485 cri.go:89] found id: ""
	I1105 19:12:38.946548   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.946556   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:38.946561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:38.946613   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:38.979628   74485 cri.go:89] found id: ""
	I1105 19:12:38.979655   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.979663   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:38.979672   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:38.979682   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:39.056066   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:39.056102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.092303   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:39.092333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:39.143754   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:39.143790   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:39.156553   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:39.156587   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:39.220882   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:41.721766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:41.734823   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:41.734893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:41.768636   74485 cri.go:89] found id: ""
	I1105 19:12:41.768668   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.768685   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:41.768693   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:41.768750   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:41.809506   74485 cri.go:89] found id: ""
	I1105 19:12:41.809533   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.809541   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:41.809546   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:41.809606   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:41.849953   74485 cri.go:89] found id: ""
	I1105 19:12:41.849977   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.849985   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:41.849991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:41.850037   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:41.893042   74485 cri.go:89] found id: ""
	I1105 19:12:41.893072   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.893084   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:41.893091   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:41.893152   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:41.936259   74485 cri.go:89] found id: ""
	I1105 19:12:41.936282   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.936292   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:41.936298   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:41.936347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:41.970322   74485 cri.go:89] found id: ""
	I1105 19:12:41.970344   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.970353   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:41.970360   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:41.970427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:42.004351   74485 cri.go:89] found id: ""
	I1105 19:12:42.004375   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.004383   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:42.004388   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:42.004443   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:42.035136   74485 cri.go:89] found id: ""
	I1105 19:12:42.035163   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.035174   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:42.035185   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:42.035201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:42.086760   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:42.086801   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:42.100795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:42.100829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:42.167480   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:42.167509   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:42.167529   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:42.248625   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:42.248664   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.961606   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.461423   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:41.224956   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:43.724906   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.846509   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.847235   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.785100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:44.798182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:44.798248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:44.834080   74485 cri.go:89] found id: ""
	I1105 19:12:44.834107   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.834115   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:44.834120   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:44.834179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:44.870572   74485 cri.go:89] found id: ""
	I1105 19:12:44.870602   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.870613   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:44.870620   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:44.870691   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:44.908960   74485 cri.go:89] found id: ""
	I1105 19:12:44.908991   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.909002   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:44.909010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:44.909075   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:44.945310   74485 cri.go:89] found id: ""
	I1105 19:12:44.945342   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.945350   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:44.945355   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:44.945409   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:44.982893   74485 cri.go:89] found id: ""
	I1105 19:12:44.982935   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.982946   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:44.982953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:44.983030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:45.015529   74485 cri.go:89] found id: ""
	I1105 19:12:45.015559   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.015571   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:45.015578   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:45.015640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:45.047252   74485 cri.go:89] found id: ""
	I1105 19:12:45.047284   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.047295   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:45.047302   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:45.047364   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:45.082963   74485 cri.go:89] found id: ""
	I1105 19:12:45.083009   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.083018   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:45.083026   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:45.083039   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:45.131844   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:45.131881   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:45.145500   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:45.145530   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:45.214668   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:45.214709   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:45.214725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:45.291203   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:45.291243   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:44.963672   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.461610   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:46.223849   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:48.225352   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.346007   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:49.346691   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.831908   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:47.844873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:47.844957   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:47.881587   74485 cri.go:89] found id: ""
	I1105 19:12:47.881617   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.881628   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:47.881644   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:47.881714   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:47.918381   74485 cri.go:89] found id: ""
	I1105 19:12:47.918411   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.918423   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:47.918430   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:47.918491   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:47.950835   74485 cri.go:89] found id: ""
	I1105 19:12:47.950864   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.950880   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:47.950889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:47.950947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:47.985234   74485 cri.go:89] found id: ""
	I1105 19:12:47.985261   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.985272   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:47.985279   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:47.985338   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:48.019406   74485 cri.go:89] found id: ""
	I1105 19:12:48.019437   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.019448   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:48.019455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:48.019532   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:48.053126   74485 cri.go:89] found id: ""
	I1105 19:12:48.053160   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.053172   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:48.053180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:48.053241   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:48.086847   74485 cri.go:89] found id: ""
	I1105 19:12:48.086872   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.086879   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:48.086885   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:48.086944   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:48.122366   74485 cri.go:89] found id: ""
	I1105 19:12:48.122388   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.122396   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:48.122404   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:48.122421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:48.171579   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:48.171622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:48.185207   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:48.185234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:48.249553   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:48.249575   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:48.249586   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:48.323391   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:48.323427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:50.861939   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:50.874943   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:50.875041   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:50.911498   74485 cri.go:89] found id: ""
	I1105 19:12:50.911522   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.911530   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:50.911536   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:50.911591   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:50.946936   74485 cri.go:89] found id: ""
	I1105 19:12:50.946962   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.946988   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:50.947034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:50.947098   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:50.983220   74485 cri.go:89] found id: ""
	I1105 19:12:50.983246   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.983258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:50.983265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:50.983314   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:51.017052   74485 cri.go:89] found id: ""
	I1105 19:12:51.017078   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.017086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:51.017092   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:51.017141   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:51.051417   74485 cri.go:89] found id: ""
	I1105 19:12:51.051448   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.051459   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:51.051466   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:51.051529   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:51.085129   74485 cri.go:89] found id: ""
	I1105 19:12:51.085164   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.085177   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:51.085182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:51.085232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:51.122065   74485 cri.go:89] found id: ""
	I1105 19:12:51.122100   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.122113   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:51.122120   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:51.122178   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:51.154909   74485 cri.go:89] found id: ""
	I1105 19:12:51.154938   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.154946   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:51.154954   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:51.154966   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:51.167768   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:51.167798   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:51.231849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:51.231873   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:51.231897   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:51.314426   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:51.314487   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:51.356654   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:51.356685   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:49.961294   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.461707   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:50.723534   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.723821   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:51.347677   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.847328   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.911774   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:53.924884   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:53.924968   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:53.957690   74485 cri.go:89] found id: ""
	I1105 19:12:53.957719   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.957729   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:53.957737   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:53.957802   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:53.990717   74485 cri.go:89] found id: ""
	I1105 19:12:53.990744   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.990751   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:53.990757   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:53.990803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:54.023229   74485 cri.go:89] found id: ""
	I1105 19:12:54.023251   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.023258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:54.023263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:54.023320   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:54.056950   74485 cri.go:89] found id: ""
	I1105 19:12:54.056977   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.056987   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:54.056995   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:54.057056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:54.091729   74485 cri.go:89] found id: ""
	I1105 19:12:54.091756   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.091768   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:54.091776   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:54.091828   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:54.123964   74485 cri.go:89] found id: ""
	I1105 19:12:54.123991   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.124001   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:54.124009   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:54.124070   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:54.155164   74485 cri.go:89] found id: ""
	I1105 19:12:54.155195   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.155204   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:54.155209   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:54.155268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:54.188161   74485 cri.go:89] found id: ""
	I1105 19:12:54.188191   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.188202   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:54.188213   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:54.188226   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:54.240906   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:54.240941   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:54.254061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:54.254093   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:54.321973   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:54.322007   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:54.322026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:54.405106   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:54.405147   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:56.941801   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:56.954658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:56.954741   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:56.990372   74485 cri.go:89] found id: ""
	I1105 19:12:56.990400   74485 logs.go:282] 0 containers: []
	W1105 19:12:56.990411   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:56.990419   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:56.990479   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:57.023047   74485 cri.go:89] found id: ""
	I1105 19:12:57.023082   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.023093   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:57.023102   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:57.023163   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:57.054991   74485 cri.go:89] found id: ""
	I1105 19:12:57.055021   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.055030   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:57.055036   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:57.055094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:57.086182   74485 cri.go:89] found id: ""
	I1105 19:12:57.086214   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.086225   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:57.086233   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:57.086295   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:57.120322   74485 cri.go:89] found id: ""
	I1105 19:12:57.120350   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.120361   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:57.120368   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:57.120431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:57.153751   74485 cri.go:89] found id: ""
	I1105 19:12:57.153781   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.153790   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:57.153796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:57.153845   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:57.189208   74485 cri.go:89] found id: ""
	I1105 19:12:57.189234   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.189244   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:57.189251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:57.189317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:57.223259   74485 cri.go:89] found id: ""
	I1105 19:12:57.223292   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.223301   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:57.223308   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:57.223320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:57.273063   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:57.273098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:57.287759   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:57.287783   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:57.353387   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:57.353409   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:57.353421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:57.426374   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:57.426411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:54.462191   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.960479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:54.723926   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.724988   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.224704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:55.847609   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:58.347062   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.348243   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.965907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:59.979081   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:59.979149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:00.010955   74485 cri.go:89] found id: ""
	I1105 19:13:00.011001   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.011012   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:00.011021   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:00.011081   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:00.044800   74485 cri.go:89] found id: ""
	I1105 19:13:00.044825   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.044832   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:00.044838   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:00.044894   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:00.082999   74485 cri.go:89] found id: ""
	I1105 19:13:00.083040   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.083050   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:00.083059   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:00.083125   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:00.120792   74485 cri.go:89] found id: ""
	I1105 19:13:00.120826   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.120835   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:00.120840   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:00.120903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:00.153156   74485 cri.go:89] found id: ""
	I1105 19:13:00.153188   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.153200   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:00.153207   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:00.153273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:00.189039   74485 cri.go:89] found id: ""
	I1105 19:13:00.189066   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.189073   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:00.189079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:00.189143   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:00.220904   74485 cri.go:89] found id: ""
	I1105 19:13:00.220932   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.220942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:00.220950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:00.221012   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:00.255414   74485 cri.go:89] found id: ""
	I1105 19:13:00.255443   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.255454   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:00.255464   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:00.255480   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:00.329027   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:00.329050   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:00.329061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:00.405813   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:00.405847   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:00.443302   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:00.443332   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:00.498413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:00.498452   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:58.960870   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.962098   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:01.723865   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.724945   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:02.846369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:04.846751   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.011897   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:03.025351   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:03.025419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:03.058881   74485 cri.go:89] found id: ""
	I1105 19:13:03.058910   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.058920   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:03.058928   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:03.059018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:03.093549   74485 cri.go:89] found id: ""
	I1105 19:13:03.093580   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.093592   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:03.093600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:03.093660   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:03.132355   74485 cri.go:89] found id: ""
	I1105 19:13:03.132384   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.132395   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:03.132402   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:03.132463   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:03.164832   74485 cri.go:89] found id: ""
	I1105 19:13:03.164864   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.164875   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:03.164888   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:03.164947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:03.203187   74485 cri.go:89] found id: ""
	I1105 19:13:03.203213   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.203221   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:03.203226   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:03.203282   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:03.238867   74485 cri.go:89] found id: ""
	I1105 19:13:03.238899   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.238921   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:03.238928   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:03.239010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:03.276139   74485 cri.go:89] found id: ""
	I1105 19:13:03.276174   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.276187   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:03.276195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:03.276251   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:03.312588   74485 cri.go:89] found id: ""
	I1105 19:13:03.312613   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.312631   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:03.312639   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:03.312650   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:03.379754   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:03.379782   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:03.379797   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:03.455719   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:03.455754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.493428   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:03.493458   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:03.545447   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:03.545481   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.060213   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:06.074756   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:06.074831   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:06.111392   74485 cri.go:89] found id: ""
	I1105 19:13:06.111421   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.111429   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:06.111435   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:06.111493   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:06.147535   74485 cri.go:89] found id: ""
	I1105 19:13:06.147568   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.147579   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:06.147585   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:06.147646   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:06.183176   74485 cri.go:89] found id: ""
	I1105 19:13:06.183198   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.183205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:06.183211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:06.183262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:06.213957   74485 cri.go:89] found id: ""
	I1105 19:13:06.213983   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.213992   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:06.213997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:06.214060   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:06.251199   74485 cri.go:89] found id: ""
	I1105 19:13:06.251227   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.251234   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:06.251240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:06.251297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:06.288128   74485 cri.go:89] found id: ""
	I1105 19:13:06.288157   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.288167   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:06.288174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:06.288236   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:06.325265   74485 cri.go:89] found id: ""
	I1105 19:13:06.325296   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.325306   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:06.325314   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:06.325375   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:06.359649   74485 cri.go:89] found id: ""
	I1105 19:13:06.359689   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.359700   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:06.359710   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:06.359725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:06.408423   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:06.408456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.421776   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:06.421804   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:06.487464   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:06.487493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:06.487507   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:06.565789   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:06.565829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.461192   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.725002   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:08.225146   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:07.346498   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.347264   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.104578   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:09.117930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:09.118022   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:09.156055   74485 cri.go:89] found id: ""
	I1105 19:13:09.156083   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.156093   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:09.156101   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:09.156161   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:09.190470   74485 cri.go:89] found id: ""
	I1105 19:13:09.190499   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.190509   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:09.190516   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:09.190576   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:09.222568   74485 cri.go:89] found id: ""
	I1105 19:13:09.222595   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.222606   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:09.222612   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:09.222677   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:09.260251   74485 cri.go:89] found id: ""
	I1105 19:13:09.260282   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.260292   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:09.260300   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:09.260362   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:09.296006   74485 cri.go:89] found id: ""
	I1105 19:13:09.296036   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.296047   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:09.296054   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:09.296118   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:09.331213   74485 cri.go:89] found id: ""
	I1105 19:13:09.331246   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.331257   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:09.331265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:09.331333   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:09.364286   74485 cri.go:89] found id: ""
	I1105 19:13:09.364316   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.364327   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:09.364335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:09.364445   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:09.398060   74485 cri.go:89] found id: ""
	I1105 19:13:09.398084   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.398092   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:09.398101   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:09.398113   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:09.447373   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:09.447409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:09.461483   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:09.461514   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:09.528213   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:09.528236   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:09.528248   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:09.607397   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:09.607430   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.146158   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:12.159183   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:12.159262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:12.193917   74485 cri.go:89] found id: ""
	I1105 19:13:12.193952   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.193963   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:12.193971   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:12.194036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:12.226558   74485 cri.go:89] found id: ""
	I1105 19:13:12.226585   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.226594   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:12.226600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:12.226662   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:12.258437   74485 cri.go:89] found id: ""
	I1105 19:13:12.258469   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.258481   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:12.258488   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:12.258557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:12.291308   74485 cri.go:89] found id: ""
	I1105 19:13:12.291341   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.291353   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:12.291361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:12.291431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:12.325768   74485 cri.go:89] found id: ""
	I1105 19:13:12.325801   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.325812   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:12.325819   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:12.325884   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:12.361077   74485 cri.go:89] found id: ""
	I1105 19:13:12.361100   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.361108   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:12.361118   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:12.361179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:12.394769   74485 cri.go:89] found id: ""
	I1105 19:13:12.394791   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.394800   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:12.394806   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:12.394864   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:12.430138   74485 cri.go:89] found id: ""
	I1105 19:13:12.430167   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.430177   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:12.430189   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:12.430200   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.472596   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:12.472637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:12.523107   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:12.523143   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:12.535797   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:12.535824   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:12.604088   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:12.604108   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:12.604123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:08.460647   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.462830   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.225468   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.225693   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:11.849320   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.347487   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:15.185725   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:15.200158   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:15.200238   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:15.238309   74485 cri.go:89] found id: ""
	I1105 19:13:15.238334   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.238342   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:15.238349   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:15.238404   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:15.272897   74485 cri.go:89] found id: ""
	I1105 19:13:15.272927   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.272938   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:15.272945   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:15.273013   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:15.307700   74485 cri.go:89] found id: ""
	I1105 19:13:15.307726   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.307737   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:15.307744   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:15.307810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:15.340156   74485 cri.go:89] found id: ""
	I1105 19:13:15.340182   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.340196   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:15.340202   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:15.340252   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:15.375930   74485 cri.go:89] found id: ""
	I1105 19:13:15.375963   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.375971   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:15.375976   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:15.376031   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:15.409876   74485 cri.go:89] found id: ""
	I1105 19:13:15.409905   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.409915   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:15.409922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:15.409984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:15.442781   74485 cri.go:89] found id: ""
	I1105 19:13:15.442808   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.442819   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:15.442825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:15.442896   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:15.480578   74485 cri.go:89] found id: ""
	I1105 19:13:15.480606   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.480614   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:15.480623   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:15.480634   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:15.530910   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:15.530952   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:15.544351   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:15.544382   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:15.618345   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:15.618373   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:15.618396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:15.704408   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:15.704451   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:14.961408   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.961486   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.724130   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.724204   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.724704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.347818   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.846423   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.244882   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:18.258667   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:18.258758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:18.292140   74485 cri.go:89] found id: ""
	I1105 19:13:18.292163   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.292171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:18.292178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:18.292235   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:18.324954   74485 cri.go:89] found id: ""
	I1105 19:13:18.324979   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.324985   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:18.324991   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:18.325048   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:18.361943   74485 cri.go:89] found id: ""
	I1105 19:13:18.361972   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.361983   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:18.361991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:18.362062   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:18.396012   74485 cri.go:89] found id: ""
	I1105 19:13:18.396036   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.396044   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:18.396050   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:18.396097   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:18.428852   74485 cri.go:89] found id: ""
	I1105 19:13:18.428875   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.428883   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:18.428889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:18.428946   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:18.464364   74485 cri.go:89] found id: ""
	I1105 19:13:18.464390   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.464397   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:18.464404   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:18.464464   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:18.496478   74485 cri.go:89] found id: ""
	I1105 19:13:18.496505   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.496514   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:18.496519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:18.496577   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:18.530313   74485 cri.go:89] found id: ""
	I1105 19:13:18.530339   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.530348   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:18.530356   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:18.530368   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:18.582593   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:18.582627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:18.596580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:18.596616   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:18.663920   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:18.663959   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:18.663974   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:18.740706   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:18.740746   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.281614   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:21.295841   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:21.295919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:21.330832   74485 cri.go:89] found id: ""
	I1105 19:13:21.330856   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.330864   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:21.330869   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:21.330922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:21.365228   74485 cri.go:89] found id: ""
	I1105 19:13:21.365257   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.365265   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:21.365269   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:21.365317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:21.418675   74485 cri.go:89] found id: ""
	I1105 19:13:21.418702   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.418719   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:21.418727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:21.418793   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:21.453966   74485 cri.go:89] found id: ""
	I1105 19:13:21.453994   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.454003   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:21.454008   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:21.454058   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:21.492030   74485 cri.go:89] found id: ""
	I1105 19:13:21.492056   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.492067   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:21.492078   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:21.492128   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:21.529146   74485 cri.go:89] found id: ""
	I1105 19:13:21.529174   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.529183   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:21.529190   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:21.529250   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:21.566491   74485 cri.go:89] found id: ""
	I1105 19:13:21.566519   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.566528   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:21.566533   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:21.566595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:21.605720   74485 cri.go:89] found id: ""
	I1105 19:13:21.605745   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.605754   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:21.605762   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:21.605772   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:21.682385   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:21.682408   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:21.682420   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:21.764519   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:21.764557   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.805090   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:21.805117   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:21.857560   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:21.857593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:19.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.961995   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.224702   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.226864   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:20.850915   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.346819   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.347230   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:24.371420   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:24.384566   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:24.384634   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:24.416283   74485 cri.go:89] found id: ""
	I1105 19:13:24.416308   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.416319   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:24.416327   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:24.416388   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:24.452875   74485 cri.go:89] found id: ""
	I1105 19:13:24.452899   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.452907   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:24.452913   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:24.452964   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:24.489946   74485 cri.go:89] found id: ""
	I1105 19:13:24.489974   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.489992   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:24.490000   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:24.490056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:24.527348   74485 cri.go:89] found id: ""
	I1105 19:13:24.527377   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.527388   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:24.527395   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:24.527451   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:24.558992   74485 cri.go:89] found id: ""
	I1105 19:13:24.559024   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.559035   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:24.559047   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:24.559105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:24.591405   74485 cri.go:89] found id: ""
	I1105 19:13:24.591437   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.591448   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:24.591455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:24.591516   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.625002   74485 cri.go:89] found id: ""
	I1105 19:13:24.625031   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.625040   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:24.625048   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:24.625114   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:24.657867   74485 cri.go:89] found id: ""
	I1105 19:13:24.657896   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.657907   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:24.657918   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:24.657931   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:24.708444   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:24.708482   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:24.721771   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:24.721814   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:24.793946   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:24.793980   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:24.793996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:24.875130   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:24.875167   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:27.412872   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:27.426996   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:27.427072   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:27.462434   74485 cri.go:89] found id: ""
	I1105 19:13:27.462458   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.462468   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:27.462475   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:27.462536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:27.496916   74485 cri.go:89] found id: ""
	I1105 19:13:27.496951   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.496962   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:27.496969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:27.497035   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:27.528826   74485 cri.go:89] found id: ""
	I1105 19:13:27.528853   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.528861   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:27.528867   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:27.528919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:27.563164   74485 cri.go:89] found id: ""
	I1105 19:13:27.563193   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.563204   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:27.563210   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:27.563284   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:27.600136   74485 cri.go:89] found id: ""
	I1105 19:13:27.600164   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.600174   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:27.600180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:27.600247   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:27.634326   74485 cri.go:89] found id: ""
	I1105 19:13:27.634358   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.634368   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:27.634377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:27.634452   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.462295   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:26.961567   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.723935   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.725498   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.847362   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.349542   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.668154   74485 cri.go:89] found id: ""
	I1105 19:13:27.668185   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.668196   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:27.668203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:27.668263   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:27.706016   74485 cri.go:89] found id: ""
	I1105 19:13:27.706043   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.706051   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:27.706059   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:27.706071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:27.755890   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:27.755929   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:27.773038   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:27.773063   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:27.863392   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:27.863414   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:27.863429   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:27.949149   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:27.949185   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.489333   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:30.502794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:30.502878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:30.536263   74485 cri.go:89] found id: ""
	I1105 19:13:30.536289   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.536297   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:30.536302   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:30.536347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:30.570418   74485 cri.go:89] found id: ""
	I1105 19:13:30.570445   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.570455   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:30.570462   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:30.570523   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:30.601972   74485 cri.go:89] found id: ""
	I1105 19:13:30.602003   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.602013   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:30.602020   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:30.602086   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:30.634151   74485 cri.go:89] found id: ""
	I1105 19:13:30.634183   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.634195   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:30.634203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:30.634265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:30.666384   74485 cri.go:89] found id: ""
	I1105 19:13:30.666415   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.666425   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:30.666433   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:30.666498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:30.699587   74485 cri.go:89] found id: ""
	I1105 19:13:30.699619   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.699631   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:30.699639   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:30.699699   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:30.731917   74485 cri.go:89] found id: ""
	I1105 19:13:30.731972   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.731983   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:30.731990   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:30.732051   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:30.768807   74485 cri.go:89] found id: ""
	I1105 19:13:30.768832   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.768840   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:30.768849   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:30.768860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:30.848594   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:30.848626   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.889031   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:30.889067   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:30.940550   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:30.940588   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:30.953810   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:30.953845   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:31.023633   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:29.461686   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:31.961484   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.225024   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.723965   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.847298   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:35.347135   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:33.524150   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:33.539025   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:33.539112   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:33.584756   74485 cri.go:89] found id: ""
	I1105 19:13:33.584786   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.584799   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:33.584807   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:33.584869   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:33.624785   74485 cri.go:89] found id: ""
	I1105 19:13:33.624816   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.624829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:33.624836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:33.625025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:33.668750   74485 cri.go:89] found id: ""
	I1105 19:13:33.668783   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.668794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:33.668804   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:33.668867   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:33.701675   74485 cri.go:89] found id: ""
	I1105 19:13:33.701707   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.701735   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:33.701743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:33.701817   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:33.737368   74485 cri.go:89] found id: ""
	I1105 19:13:33.737393   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.737401   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:33.737407   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:33.737458   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:33.770589   74485 cri.go:89] found id: ""
	I1105 19:13:33.770620   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.770630   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:33.770638   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:33.770704   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:33.802635   74485 cri.go:89] found id: ""
	I1105 19:13:33.802668   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.802680   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:33.802687   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:33.802751   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:33.839274   74485 cri.go:89] found id: ""
	I1105 19:13:33.839301   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.839309   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:33.839317   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:33.839328   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:33.881049   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:33.881090   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:33.932704   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:33.932743   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:33.945979   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:33.946007   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:34.017355   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:34.017375   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:34.017390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:36.596284   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:36.608240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:36.608306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:36.641846   74485 cri.go:89] found id: ""
	I1105 19:13:36.641878   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.641887   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:36.641901   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:36.641966   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:36.676553   74485 cri.go:89] found id: ""
	I1105 19:13:36.676584   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.676595   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:36.676602   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:36.676669   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:36.711931   74485 cri.go:89] found id: ""
	I1105 19:13:36.711961   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.711972   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:36.711980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:36.712042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:36.748510   74485 cri.go:89] found id: ""
	I1105 19:13:36.748534   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.748542   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:36.748547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:36.748596   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:36.781869   74485 cri.go:89] found id: ""
	I1105 19:13:36.781899   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.781912   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:36.781922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:36.781983   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:36.816574   74485 cri.go:89] found id: ""
	I1105 19:13:36.816597   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.816605   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:36.816610   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:36.816658   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:36.852894   74485 cri.go:89] found id: ""
	I1105 19:13:36.852921   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.852928   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:36.852934   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:36.852996   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:36.891732   74485 cri.go:89] found id: ""
	I1105 19:13:36.891764   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.891783   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:36.891795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:36.891810   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:36.964948   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:36.964972   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:36.964987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:37.043727   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:37.043765   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:37.084306   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:37.084333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:37.133238   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:37.133274   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:34.461773   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:36.960440   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:34.724805   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.224830   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.227912   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.347383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.347770   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.647492   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:39.659944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:39.660025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:39.695382   74485 cri.go:89] found id: ""
	I1105 19:13:39.695405   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.695415   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:39.695422   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:39.695480   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:39.731807   74485 cri.go:89] found id: ""
	I1105 19:13:39.731833   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.731841   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:39.731846   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:39.731895   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:39.766913   74485 cri.go:89] found id: ""
	I1105 19:13:39.766945   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.766955   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:39.766963   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:39.767049   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:39.800265   74485 cri.go:89] found id: ""
	I1105 19:13:39.800288   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.800296   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:39.800301   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:39.800346   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:39.832753   74485 cri.go:89] found id: ""
	I1105 19:13:39.832781   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.832789   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:39.832794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:39.832843   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:39.865950   74485 cri.go:89] found id: ""
	I1105 19:13:39.865980   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.865990   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:39.865997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:39.866046   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:39.902918   74485 cri.go:89] found id: ""
	I1105 19:13:39.902948   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.902957   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:39.902962   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:39.903039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:39.935086   74485 cri.go:89] found id: ""
	I1105 19:13:39.935117   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.935129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:39.935139   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:39.935152   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:39.997935   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:39.997961   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:39.997976   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:40.076794   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:40.076852   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:40.114178   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:40.114209   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:40.163512   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:40.163550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:38.961003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:40.962241   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.724237   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:43.725317   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.847149   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:44.346097   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:42.676843   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:42.689855   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:42.689930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:42.724108   74485 cri.go:89] found id: ""
	I1105 19:13:42.724139   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.724148   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:42.724156   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:42.724218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:42.760816   74485 cri.go:89] found id: ""
	I1105 19:13:42.760844   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.760854   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:42.760861   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:42.760924   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:42.795111   74485 cri.go:89] found id: ""
	I1105 19:13:42.795134   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.795142   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:42.795147   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:42.795195   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:42.832964   74485 cri.go:89] found id: ""
	I1105 19:13:42.832988   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.832997   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:42.833003   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:42.833065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:42.868817   74485 cri.go:89] found id: ""
	I1105 19:13:42.868848   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.868858   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:42.868865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:42.868933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:42.902015   74485 cri.go:89] found id: ""
	I1105 19:13:42.902044   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.902051   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:42.902056   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:42.902146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:42.934298   74485 cri.go:89] found id: ""
	I1105 19:13:42.934322   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.934330   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:42.934335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:42.934385   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:42.969804   74485 cri.go:89] found id: ""
	I1105 19:13:42.969831   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.969843   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:42.969854   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:42.969873   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:43.019922   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:43.019959   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:43.033594   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:43.033622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:43.108220   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:43.108240   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:43.108251   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:43.191946   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:43.191987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:45.730728   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:45.743344   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:45.743419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:45.777693   74485 cri.go:89] found id: ""
	I1105 19:13:45.777728   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.777739   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:45.777747   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:45.777810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:45.810195   74485 cri.go:89] found id: ""
	I1105 19:13:45.810222   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.810233   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:45.810240   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:45.810308   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:45.851210   74485 cri.go:89] found id: ""
	I1105 19:13:45.851240   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.851247   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:45.851252   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:45.851311   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:45.885501   74485 cri.go:89] found id: ""
	I1105 19:13:45.885531   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.885540   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:45.885546   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:45.885595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:45.921638   74485 cri.go:89] found id: ""
	I1105 19:13:45.921667   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.921676   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:45.921684   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:45.921745   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:45.954341   74485 cri.go:89] found id: ""
	I1105 19:13:45.954373   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.954384   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:45.954394   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:45.954461   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:45.988840   74485 cri.go:89] found id: ""
	I1105 19:13:45.988865   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.988873   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:45.988879   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:45.988949   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:46.025409   74485 cri.go:89] found id: ""
	I1105 19:13:46.025441   74485 logs.go:282] 0 containers: []
	W1105 19:13:46.025458   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:46.025470   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:46.025486   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:46.037763   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:46.037787   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:46.112619   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:46.112663   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:46.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:46.192165   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:46.192199   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:46.233235   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:46.233263   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:42.962569   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:45.461256   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:47.461781   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.225004   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.723774   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.346687   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.787685   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:48.800681   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:48.800749   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:48.835344   74485 cri.go:89] found id: ""
	I1105 19:13:48.835366   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.835374   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:48.835383   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:48.835429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:48.867447   74485 cri.go:89] found id: ""
	I1105 19:13:48.867474   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.867483   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:48.867488   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:48.867536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:48.899135   74485 cri.go:89] found id: ""
	I1105 19:13:48.899160   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.899167   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:48.899172   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:48.899221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:48.932208   74485 cri.go:89] found id: ""
	I1105 19:13:48.932243   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.932255   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:48.932263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:48.932326   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:48.967174   74485 cri.go:89] found id: ""
	I1105 19:13:48.967202   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.967210   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:48.967215   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:48.967267   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:48.998902   74485 cri.go:89] found id: ""
	I1105 19:13:48.998932   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.998942   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:48.998950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:48.999030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:49.030946   74485 cri.go:89] found id: ""
	I1105 19:13:49.030988   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.030999   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:49.031006   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:49.031074   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:49.063489   74485 cri.go:89] found id: ""
	I1105 19:13:49.063517   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.063528   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:49.063540   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:49.063555   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:49.116433   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:49.116477   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:49.131439   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:49.131476   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:49.199770   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:49.199795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:49.199809   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:49.275503   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:49.275543   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:51.816208   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:51.829328   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:51.829399   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:51.863320   74485 cri.go:89] found id: ""
	I1105 19:13:51.863346   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.863354   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:51.863359   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:51.863406   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:51.896589   74485 cri.go:89] found id: ""
	I1105 19:13:51.896618   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.896628   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:51.896635   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:51.896697   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:51.933744   74485 cri.go:89] found id: ""
	I1105 19:13:51.933769   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.933776   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:51.933781   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:51.933829   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:51.970806   74485 cri.go:89] found id: ""
	I1105 19:13:51.970829   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.970836   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:51.970842   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:51.970889   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:52.004087   74485 cri.go:89] found id: ""
	I1105 19:13:52.004116   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.004124   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:52.004129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:52.004186   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:52.041721   74485 cri.go:89] found id: ""
	I1105 19:13:52.041752   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.041763   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:52.041771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:52.041835   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:52.079253   74485 cri.go:89] found id: ""
	I1105 19:13:52.079277   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.079285   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:52.079292   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:52.079351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:52.112604   74485 cri.go:89] found id: ""
	I1105 19:13:52.112642   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.112653   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:52.112664   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:52.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:52.160799   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:52.160841   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:52.174323   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:52.174355   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:52.247358   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:52.247383   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:52.247395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:52.326071   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:52.326108   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:49.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.461239   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.724514   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.724742   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.848418   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:53.346329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.347199   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:54.866454   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:54.879015   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:54.879093   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:54.911729   74485 cri.go:89] found id: ""
	I1105 19:13:54.911765   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.911777   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:54.911785   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:54.911846   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:54.943137   74485 cri.go:89] found id: ""
	I1105 19:13:54.943169   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.943185   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:54.943193   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:54.943253   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:54.977951   74485 cri.go:89] found id: ""
	I1105 19:13:54.977980   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.977991   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:54.977998   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:54.978061   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:55.009453   74485 cri.go:89] found id: ""
	I1105 19:13:55.009478   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.009486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:55.009491   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:55.009537   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:55.040790   74485 cri.go:89] found id: ""
	I1105 19:13:55.040814   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.040821   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:55.040827   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:55.040878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:55.073401   74485 cri.go:89] found id: ""
	I1105 19:13:55.073430   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.073441   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:55.073449   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:55.073508   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:55.105419   74485 cri.go:89] found id: ""
	I1105 19:13:55.105443   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.105451   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:55.105456   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:55.105511   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:55.137363   74485 cri.go:89] found id: ""
	I1105 19:13:55.137395   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.137406   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:55.137416   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:55.137431   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:55.174176   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:55.174201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:55.221658   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:55.221693   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:55.235044   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:55.235070   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:55.308192   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:55.308218   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:55.308234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:54.461424   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:56.961198   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.223920   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.224915   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.847329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:00.347371   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.892462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:57.905472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:57.905543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:57.946044   74485 cri.go:89] found id: ""
	I1105 19:13:57.946071   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.946081   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:57.946089   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:57.946149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:57.980762   74485 cri.go:89] found id: ""
	I1105 19:13:57.980791   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.980803   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:57.980811   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:57.980874   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:58.013351   74485 cri.go:89] found id: ""
	I1105 19:13:58.013374   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.013381   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:58.013386   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:58.013433   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:58.049056   74485 cri.go:89] found id: ""
	I1105 19:13:58.049083   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.049091   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:58.049097   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:58.049147   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:58.081476   74485 cri.go:89] found id: ""
	I1105 19:13:58.081507   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.081517   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:58.081524   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:58.081583   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:58.114526   74485 cri.go:89] found id: ""
	I1105 19:13:58.114554   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.114564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:58.114571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:58.114630   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:58.148219   74485 cri.go:89] found id: ""
	I1105 19:13:58.148243   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.148252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:58.148257   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:58.148312   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:58.183254   74485 cri.go:89] found id: ""
	I1105 19:13:58.183277   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.183285   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:58.183292   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:58.183304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:58.234747   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:58.234785   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:58.248269   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:58.248300   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:58.313290   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:58.313312   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:58.313327   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:58.389847   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:58.389889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:00.927957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:00.941525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:00.941593   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:00.974891   74485 cri.go:89] found id: ""
	I1105 19:14:00.974920   74485 logs.go:282] 0 containers: []
	W1105 19:14:00.974931   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:00.974938   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:00.975018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:01.008224   74485 cri.go:89] found id: ""
	I1105 19:14:01.008250   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.008262   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:01.008270   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:01.008328   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:01.044514   74485 cri.go:89] found id: ""
	I1105 19:14:01.044545   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.044553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:01.044559   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:01.044614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:01.077091   74485 cri.go:89] found id: ""
	I1105 19:14:01.077124   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.077135   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:01.077141   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:01.077197   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:01.109947   74485 cri.go:89] found id: ""
	I1105 19:14:01.109976   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.109986   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:01.109994   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:01.110054   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:01.146162   74485 cri.go:89] found id: ""
	I1105 19:14:01.146193   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.146203   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:01.146211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:01.146275   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:01.180335   74485 cri.go:89] found id: ""
	I1105 19:14:01.180360   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.180370   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:01.180377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:01.180436   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:01.216160   74485 cri.go:89] found id: ""
	I1105 19:14:01.216189   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.216199   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:01.216221   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:01.216236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:01.229426   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:01.229455   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:01.298847   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:01.298874   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:01.298889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:01.375255   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:01.375299   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:01.417946   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:01.418026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:59.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.961362   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:59.724103   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.724976   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.725344   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:02.349032   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:04.847734   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.973713   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:03.987128   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:03.987198   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:04.020050   74485 cri.go:89] found id: ""
	I1105 19:14:04.020081   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.020091   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:04.020098   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:04.020164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:04.053458   74485 cri.go:89] found id: ""
	I1105 19:14:04.053485   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.053492   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:04.053498   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:04.053544   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:04.086417   74485 cri.go:89] found id: ""
	I1105 19:14:04.086442   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.086455   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:04.086461   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:04.086513   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:04.122035   74485 cri.go:89] found id: ""
	I1105 19:14:04.122059   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.122067   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:04.122073   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:04.122120   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:04.158732   74485 cri.go:89] found id: ""
	I1105 19:14:04.158758   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.158765   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:04.158771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:04.158822   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:04.190497   74485 cri.go:89] found id: ""
	I1105 19:14:04.190525   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.190536   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:04.190543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:04.190604   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:04.222040   74485 cri.go:89] found id: ""
	I1105 19:14:04.222066   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.222074   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:04.222079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:04.222131   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:04.258753   74485 cri.go:89] found id: ""
	I1105 19:14:04.258781   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.258793   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:04.258804   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:04.258819   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:04.299966   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:04.300052   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:04.355364   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:04.355395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:04.368954   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:04.368980   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:04.431658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:04.431688   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:04.431700   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
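	The block above is one iteration of minikube's apiserver wait loop for the v1.20.0 profile (pid 74485, most likely the old-k8s-version cluster given the v1.20.0 kubectl binary): it looks for a kube-apiserver process, asks CRI-O via crictl for each control-plane container, finds none, and then gathers kubelet, dmesg, CRI-O and "describe nodes" output, with the describe call failing because nothing is listening on localhost:8443. Below is a minimal sketch of running the same checks by hand over minikube ssh; the profile name is a placeholder, and the individual commands are copied from the log lines above, not from the test harness.

	    # Hypothetical profile name; substitute the real one from the test run.
	    PROFILE=old-k8s-version-XXXXXX

	    # Is a kube-apiserver process running at all? (same pgrep the loop uses)
	    minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Does CRI-O know about any control-plane containers, even exited ones?
	    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --name=kube-apiserver
	    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --name=etcd

	    # The same diagnostics the loop collects when the checks come back empty.
	    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
	    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400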
	I1105 19:14:07.015289   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:07.029580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:07.029644   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:07.066931   74485 cri.go:89] found id: ""
	I1105 19:14:07.066964   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.066993   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:07.067004   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:07.067059   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:07.104315   74485 cri.go:89] found id: ""
	I1105 19:14:07.104341   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.104349   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:07.104354   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:07.104401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:07.141271   74485 cri.go:89] found id: ""
	I1105 19:14:07.141298   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.141305   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:07.141311   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:07.141360   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:07.174600   74485 cri.go:89] found id: ""
	I1105 19:14:07.174631   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.174643   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:07.174653   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:07.174707   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:07.211920   74485 cri.go:89] found id: ""
	I1105 19:14:07.211958   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.211969   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:07.211975   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:07.212027   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:07.248238   74485 cri.go:89] found id: ""
	I1105 19:14:07.248269   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.248280   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:07.248286   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:07.248344   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:07.279833   74485 cri.go:89] found id: ""
	I1105 19:14:07.279864   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.279874   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:07.279881   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:07.279931   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:07.317411   74485 cri.go:89] found id: ""
	I1105 19:14:07.317441   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.317452   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:07.317461   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:07.317474   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:07.390499   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:07.390535   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:07.390556   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.488858   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:07.488895   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:07.528612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:07.528645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:07.581884   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:07.581927   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:03.961433   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.460953   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.223402   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:08.723797   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:07.348258   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:09.846465   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.096089   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:10.110828   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:10.110898   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:10.147299   74485 cri.go:89] found id: ""
	I1105 19:14:10.147332   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.147344   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:10.147350   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:10.147401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:10.181457   74485 cri.go:89] found id: ""
	I1105 19:14:10.181482   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.181489   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:10.181495   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:10.181540   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:10.215210   74485 cri.go:89] found id: ""
	I1105 19:14:10.215241   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.215252   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:10.215259   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:10.215319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:10.249587   74485 cri.go:89] found id: ""
	I1105 19:14:10.249609   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.249617   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:10.249625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:10.249679   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:10.282566   74485 cri.go:89] found id: ""
	I1105 19:14:10.282591   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.282598   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:10.282604   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:10.282672   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:10.314312   74485 cri.go:89] found id: ""
	I1105 19:14:10.314344   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.314355   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:10.314361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:10.314415   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:10.346988   74485 cri.go:89] found id: ""
	I1105 19:14:10.347016   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.347028   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:10.347035   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:10.347088   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:10.381326   74485 cri.go:89] found id: ""
	I1105 19:14:10.381354   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.381370   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:10.381380   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:10.381394   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:10.418311   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:10.418344   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:10.469559   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:10.469590   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:10.482394   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:10.482427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:10.551831   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:10.551854   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:10.551870   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:08.462072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.961478   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:12.724974   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:11.846737   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:14.346050   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
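	The interleaved pod_ready.go lines come from three other StartStop clusters running in parallel (pids 73732, 74141 and 73496); each is polling its metrics-server pod and repeatedly seeing the Ready condition as False, which lines up with the metrics-server related timeouts in the failure list at the top of this report. A hedged sketch of inspecting that condition manually is shown below; the pod name is taken from the log, while the context name and the k8s-app=metrics-server label are assumptions about the usual minikube addon labels.

	    # Context name is a placeholder; the pod name appears in the log above.
	    kubectl --context CONTEXT-XXXXXX -n kube-system \
	      get pod metrics-server-6867b74b74-vw2sm \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

	    # Or watch all metrics-server pods until they become Ready
	    # (assumes the addon pods carry the k8s-app=metrics-server label).
	    kubectl --context CONTEXT-XXXXXX -n kube-system get pods -l k8s-app=metrics-server -w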
	I1105 19:14:13.127576   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:13.143182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:13.143242   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:13.188794   74485 cri.go:89] found id: ""
	I1105 19:14:13.188827   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.188839   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:13.188846   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:13.188897   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:13.221790   74485 cri.go:89] found id: ""
	I1105 19:14:13.221818   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.221829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:13.221836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:13.221893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:13.255164   74485 cri.go:89] found id: ""
	I1105 19:14:13.255194   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.255205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:13.255212   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:13.255272   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:13.288203   74485 cri.go:89] found id: ""
	I1105 19:14:13.288231   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.288241   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:13.288249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:13.288307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:13.321438   74485 cri.go:89] found id: ""
	I1105 19:14:13.321463   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.321475   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:13.321482   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:13.321541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:13.361858   74485 cri.go:89] found id: ""
	I1105 19:14:13.361886   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.361897   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:13.361905   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:13.361979   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:13.394210   74485 cri.go:89] found id: ""
	I1105 19:14:13.394239   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.394252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:13.394260   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:13.394324   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:13.434665   74485 cri.go:89] found id: ""
	I1105 19:14:13.434697   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.434705   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:13.434712   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:13.434724   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:13.447849   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:13.447875   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:13.514353   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:13.514377   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:13.514390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:13.590746   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:13.590784   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:13.627704   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:13.627732   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:16.180171   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:16.193282   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:16.193342   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:16.230087   74485 cri.go:89] found id: ""
	I1105 19:14:16.230118   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.230128   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:16.230137   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:16.230200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:16.264315   74485 cri.go:89] found id: ""
	I1105 19:14:16.264348   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.264360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:16.264368   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:16.264429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:16.298197   74485 cri.go:89] found id: ""
	I1105 19:14:16.298231   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.298243   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:16.298251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:16.298316   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:16.333149   74485 cri.go:89] found id: ""
	I1105 19:14:16.333180   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.333193   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:16.333203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:16.333268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:16.366863   74485 cri.go:89] found id: ""
	I1105 19:14:16.366887   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.366895   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:16.366900   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:16.366947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:16.400434   74485 cri.go:89] found id: ""
	I1105 19:14:16.400458   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.400466   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:16.400472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:16.400524   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:16.435475   74485 cri.go:89] found id: ""
	I1105 19:14:16.435497   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.435504   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:16.435510   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:16.435560   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:16.470577   74485 cri.go:89] found id: ""
	I1105 19:14:16.470604   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.470612   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:16.470620   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:16.470632   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:16.483061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:16.483094   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:16.550662   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:16.550690   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:16.550702   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:16.629372   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:16.629411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:16.669488   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:16.669526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:12.961576   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.461132   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.461748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.224068   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.225065   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:16.347305   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:18.847161   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.219244   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:19.232682   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:19.232744   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:19.264594   74485 cri.go:89] found id: ""
	I1105 19:14:19.264624   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.264635   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:19.264649   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:19.264708   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:19.301434   74485 cri.go:89] found id: ""
	I1105 19:14:19.301468   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.301479   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:19.301487   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:19.301558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:19.333465   74485 cri.go:89] found id: ""
	I1105 19:14:19.333494   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.333502   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:19.333508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:19.333558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:19.365865   74485 cri.go:89] found id: ""
	I1105 19:14:19.365892   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.365900   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:19.365906   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:19.365958   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:19.406533   74485 cri.go:89] found id: ""
	I1105 19:14:19.406563   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.406575   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:19.406583   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:19.406639   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:19.439351   74485 cri.go:89] found id: ""
	I1105 19:14:19.439377   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.439386   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:19.439392   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:19.439438   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:19.475033   74485 cri.go:89] found id: ""
	I1105 19:14:19.475058   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.475065   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:19.475070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:19.475119   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:19.508638   74485 cri.go:89] found id: ""
	I1105 19:14:19.508662   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.508670   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:19.508678   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:19.508689   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:19.588268   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:19.588293   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:19.588304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:19.671382   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:19.671415   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:19.716497   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:19.716526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:19.769686   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:19.769722   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.283476   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:22.296393   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:22.296456   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:22.331226   74485 cri.go:89] found id: ""
	I1105 19:14:22.331247   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.331255   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:22.331261   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:22.331306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:22.363466   74485 cri.go:89] found id: ""
	I1105 19:14:22.363499   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.363510   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:22.363518   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:22.363586   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:22.397025   74485 cri.go:89] found id: ""
	I1105 19:14:22.397052   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.397061   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:22.397066   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:22.397116   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:22.429450   74485 cri.go:89] found id: ""
	I1105 19:14:22.429476   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.429486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:22.429493   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:22.429554   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:22.461615   74485 cri.go:89] found id: ""
	I1105 19:14:22.461643   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.461654   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:22.461660   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:22.461728   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:22.492470   74485 cri.go:89] found id: ""
	I1105 19:14:22.492502   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.492513   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:22.492521   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:22.492587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:22.525335   74485 cri.go:89] found id: ""
	I1105 19:14:22.525358   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.525366   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:22.525372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:22.525423   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:22.558854   74485 cri.go:89] found id: ""
	I1105 19:14:22.558881   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.558890   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:22.558901   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:22.558916   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:22.608638   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:22.608674   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.621769   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:22.621800   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:14:19.461812   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.960286   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.724482   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:22.224505   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:24.225072   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.347018   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:23.347099   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	W1105 19:14:22.688971   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:22.688998   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:22.689012   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:22.770517   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:22.770558   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:25.315778   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:25.335372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:25.335444   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:25.383988   74485 cri.go:89] found id: ""
	I1105 19:14:25.384019   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.384029   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:25.384036   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:25.384096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:25.432070   74485 cri.go:89] found id: ""
	I1105 19:14:25.432103   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.432115   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:25.432122   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:25.432184   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:25.464859   74485 cri.go:89] found id: ""
	I1105 19:14:25.464891   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.464902   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:25.464909   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:25.464976   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:25.498684   74485 cri.go:89] found id: ""
	I1105 19:14:25.498712   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.498719   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:25.498724   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:25.498777   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:25.532998   74485 cri.go:89] found id: ""
	I1105 19:14:25.533023   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.533032   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:25.533039   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:25.533084   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:25.568101   74485 cri.go:89] found id: ""
	I1105 19:14:25.568130   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.568138   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:25.568144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:25.568208   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:25.600470   74485 cri.go:89] found id: ""
	I1105 19:14:25.600495   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.600503   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:25.600509   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:25.600564   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:25.631792   74485 cri.go:89] found id: ""
	I1105 19:14:25.631824   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.631834   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:25.631845   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:25.631860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:25.683820   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:25.683856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:25.698066   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:25.698095   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:25.764838   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:25.764869   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:25.764886   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:25.838791   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:25.838828   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:23.966002   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.460153   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.724324   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:29.223490   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:25.847528   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.346739   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.376183   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:28.389686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:28.389760   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:28.424180   74485 cri.go:89] found id: ""
	I1105 19:14:28.424209   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.424221   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:28.424229   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:28.424289   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:28.462742   74485 cri.go:89] found id: ""
	I1105 19:14:28.462765   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.462777   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:28.462784   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:28.462839   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:28.494550   74485 cri.go:89] found id: ""
	I1105 19:14:28.494574   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.494581   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:28.494588   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:28.494667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:28.525606   74485 cri.go:89] found id: ""
	I1105 19:14:28.525632   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.525639   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:28.525645   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:28.525696   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:28.558599   74485 cri.go:89] found id: ""
	I1105 19:14:28.558628   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.558638   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:28.558644   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:28.558701   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:28.590496   74485 cri.go:89] found id: ""
	I1105 19:14:28.590522   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.590530   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:28.590535   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:28.590599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:28.622748   74485 cri.go:89] found id: ""
	I1105 19:14:28.622772   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.622780   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:28.622786   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:28.622836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:28.656452   74485 cri.go:89] found id: ""
	I1105 19:14:28.656477   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.656485   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:28.656493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:28.656504   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.736458   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:28.736505   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:28.771923   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:28.771954   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:28.821099   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:28.821133   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:28.834698   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:28.834726   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:28.900543   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
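	Every retry ends with the same "connection to the server localhost:8443 was refused" error, which restates what the crictl checks already showed: no apiserver container exists, so nothing is serving the API port on the node. A quick way to confirm this from inside the node is sketched below, reusing the placeholder $PROFILE from the earlier sketch; it assumes ss and curl are available in the node image, which is not guaranteed.

	    # Is anything listening on the API server port?
	    minikube -p "$PROFILE" ssh -- sudo ss -tlnp | grep 8443 || echo "port 8443 not listening"

	    # Does the apiserver answer its health endpoint? (prints 'ok' when healthy)
	    minikube -p "$PROFILE" ssh -- curl -sk https://localhost:8443/healthz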
	I1105 19:14:31.400733   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:31.414573   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:31.414647   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:31.452244   74485 cri.go:89] found id: ""
	I1105 19:14:31.452275   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.452286   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:31.452293   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:31.452353   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:31.485898   74485 cri.go:89] found id: ""
	I1105 19:14:31.485920   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.485935   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:31.485940   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:31.486009   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:31.522826   74485 cri.go:89] found id: ""
	I1105 19:14:31.522850   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.522858   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:31.522865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:31.522925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:31.560096   74485 cri.go:89] found id: ""
	I1105 19:14:31.560136   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.560164   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:31.560174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:31.560234   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:31.596698   74485 cri.go:89] found id: ""
	I1105 19:14:31.596725   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.596733   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:31.596738   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:31.596792   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:31.635109   74485 cri.go:89] found id: ""
	I1105 19:14:31.635138   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.635148   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:31.635156   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:31.635221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:31.667612   74485 cri.go:89] found id: ""
	I1105 19:14:31.667639   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.667651   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:31.667658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:31.667726   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:31.699815   74485 cri.go:89] found id: ""
	I1105 19:14:31.699844   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.699854   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:31.699864   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:31.699879   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:31.737165   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:31.737196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:31.788513   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:31.788550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:31.801580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:31.801609   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:31.871658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.871683   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:31.871696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.462108   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.961875   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:31.223977   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:33.724027   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.847090   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:32.847233   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.847857   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.450954   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:34.466129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:34.466204   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:34.499984   74485 cri.go:89] found id: ""
	I1105 19:14:34.500009   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.500020   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:34.500027   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:34.500091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:34.532923   74485 cri.go:89] found id: ""
	I1105 19:14:34.532950   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.532958   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:34.532969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:34.533017   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:34.566772   74485 cri.go:89] found id: ""
	I1105 19:14:34.566803   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.566811   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:34.566817   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:34.566872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:34.607398   74485 cri.go:89] found id: ""
	I1105 19:14:34.607422   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.607430   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:34.607435   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:34.607497   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:34.640091   74485 cri.go:89] found id: ""
	I1105 19:14:34.640123   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.640135   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:34.640143   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:34.640207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:34.677164   74485 cri.go:89] found id: ""
	I1105 19:14:34.677201   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.677211   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:34.677217   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:34.677266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:34.714900   74485 cri.go:89] found id: ""
	I1105 19:14:34.714931   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.714942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:34.714949   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:34.715023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:34.751003   74485 cri.go:89] found id: ""
	I1105 19:14:34.751032   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.751040   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:34.751048   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:34.751059   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:34.822279   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:34.822301   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:34.822315   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:34.898607   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:34.898640   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:34.934727   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:34.934754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:34.985935   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:34.985969   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.500117   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:37.512467   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:37.512541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:37.544914   74485 cri.go:89] found id: ""
	I1105 19:14:37.544941   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.544952   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:37.544959   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:37.545028   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:37.581507   74485 cri.go:89] found id: ""
	I1105 19:14:37.581535   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.581545   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:37.581553   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:37.581612   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:37.615546   74485 cri.go:89] found id: ""
	I1105 19:14:37.615576   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.615585   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:37.615592   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:37.615667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:37.648239   74485 cri.go:89] found id: ""
	I1105 19:14:37.648267   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.648276   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:37.648283   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:37.648343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:33.460860   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:35.461416   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:36.224852   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:38.725488   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.347563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:39.347732   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.682861   74485 cri.go:89] found id: ""
	I1105 19:14:37.682891   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.682898   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:37.682904   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:37.682952   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:37.715506   74485 cri.go:89] found id: ""
	I1105 19:14:37.715532   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.715540   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:37.715547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:37.715597   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:37.747973   74485 cri.go:89] found id: ""
	I1105 19:14:37.748003   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.748014   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:37.748022   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:37.748083   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:37.780270   74485 cri.go:89] found id: ""
	I1105 19:14:37.780294   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.780302   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:37.780310   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:37.780321   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.793885   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:37.793914   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:37.860114   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:37.860140   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:37.860154   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:37.941221   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:37.941255   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.980537   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:37.980567   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.532301   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:40.545540   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:40.545599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:40.578642   74485 cri.go:89] found id: ""
	I1105 19:14:40.578687   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.578699   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:40.578707   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:40.578772   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:40.612049   74485 cri.go:89] found id: ""
	I1105 19:14:40.612078   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.612089   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:40.612097   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:40.612159   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:40.644495   74485 cri.go:89] found id: ""
	I1105 19:14:40.644519   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.644527   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:40.644532   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:40.644587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:40.676890   74485 cri.go:89] found id: ""
	I1105 19:14:40.676923   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.676931   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:40.676937   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:40.676984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:40.710095   74485 cri.go:89] found id: ""
	I1105 19:14:40.710125   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.710136   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:40.710144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:40.710200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:40.748323   74485 cri.go:89] found id: ""
	I1105 19:14:40.748353   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.748364   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:40.748372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:40.748501   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:40.781578   74485 cri.go:89] found id: ""
	I1105 19:14:40.781606   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.781618   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:40.781626   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:40.781689   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:40.816010   74485 cri.go:89] found id: ""
	I1105 19:14:40.816048   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.816060   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:40.816071   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:40.816086   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.869836   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:40.869876   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:40.883436   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:40.883471   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:40.946538   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:40.946566   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:40.946585   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:41.023085   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:41.023123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.962163   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.461278   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.726894   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.224939   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:41.847053   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:44.346789   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.566841   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:43.579425   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:43.579498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:43.620500   74485 cri.go:89] found id: ""
	I1105 19:14:43.620526   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.620535   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:43.620541   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:43.620600   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:43.652992   74485 cri.go:89] found id: ""
	I1105 19:14:43.653024   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.653035   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:43.653042   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:43.653105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:43.686945   74485 cri.go:89] found id: ""
	I1105 19:14:43.686991   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.687003   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:43.687010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:43.687124   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:43.720075   74485 cri.go:89] found id: ""
	I1105 19:14:43.720103   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.720114   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:43.720121   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:43.720179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:43.757969   74485 cri.go:89] found id: ""
	I1105 19:14:43.757997   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.758005   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:43.758011   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:43.758071   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:43.790068   74485 cri.go:89] found id: ""
	I1105 19:14:43.790094   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.790103   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:43.790109   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:43.790153   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:43.821696   74485 cri.go:89] found id: ""
	I1105 19:14:43.821722   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.821733   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:43.821741   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:43.821803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:43.855976   74485 cri.go:89] found id: ""
	I1105 19:14:43.856003   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.856011   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:43.856019   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:43.856029   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:43.934375   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:43.934409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:43.972567   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:43.972597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:44.025660   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:44.025696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:44.039229   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:44.039258   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:44.112179   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:46.612815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:46.626070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:46.626145   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:46.659184   74485 cri.go:89] found id: ""
	I1105 19:14:46.659210   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.659218   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:46.659227   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:46.659288   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:46.691887   74485 cri.go:89] found id: ""
	I1105 19:14:46.691917   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.691928   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:46.691934   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:46.692003   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:46.725745   74485 cri.go:89] found id: ""
	I1105 19:14:46.725776   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.725787   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:46.725795   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:46.725847   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:46.761733   74485 cri.go:89] found id: ""
	I1105 19:14:46.761762   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.761773   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:46.761780   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:46.761842   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:46.792926   74485 cri.go:89] found id: ""
	I1105 19:14:46.792955   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.792966   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:46.792974   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:46.793036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:46.824462   74485 cri.go:89] found id: ""
	I1105 19:14:46.824503   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.824512   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:46.824519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:46.824580   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:46.865057   74485 cri.go:89] found id: ""
	I1105 19:14:46.865082   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.865090   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:46.865095   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:46.865146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:46.901357   74485 cri.go:89] found id: ""
	I1105 19:14:46.901385   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.901393   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:46.901401   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:46.901414   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:46.951986   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:46.952021   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:46.966035   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:46.966065   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:47.035163   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:47.035184   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:47.035196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:47.115825   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:47.115860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:42.961397   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.460846   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.724189   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.724319   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:46.847553   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.346787   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.658737   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:49.672088   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:49.672182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:49.708638   74485 cri.go:89] found id: ""
	I1105 19:14:49.708666   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.708674   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:49.708679   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:49.708736   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:49.744485   74485 cri.go:89] found id: ""
	I1105 19:14:49.744513   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.744521   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:49.744526   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:49.744572   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:49.779758   74485 cri.go:89] found id: ""
	I1105 19:14:49.779785   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.779794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:49.779800   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:49.779858   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:49.814216   74485 cri.go:89] found id: ""
	I1105 19:14:49.814248   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.814256   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:49.814262   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:49.814310   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:49.851348   74485 cri.go:89] found id: ""
	I1105 19:14:49.851377   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.851389   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:49.851396   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:49.851455   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:49.883866   74485 cri.go:89] found id: ""
	I1105 19:14:49.883897   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.883906   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:49.883912   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:49.883959   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:49.916944   74485 cri.go:89] found id: ""
	I1105 19:14:49.916967   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.916975   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:49.916980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:49.917039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:49.950405   74485 cri.go:89] found id: ""
	I1105 19:14:49.950437   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.950449   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:49.950459   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:49.950475   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:49.996064   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:49.996102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:50.044865   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:50.044902   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:50.058206   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:50.058236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:50.130371   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:50.130397   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:50.130412   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:49.960550   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.961271   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.724896   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.224128   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.346823   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:53.847102   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.706441   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:52.719571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:52.719655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:52.753850   74485 cri.go:89] found id: ""
	I1105 19:14:52.753880   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.753891   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:52.753899   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:52.753961   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:52.794112   74485 cri.go:89] found id: ""
	I1105 19:14:52.794139   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.794149   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:52.794156   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:52.794218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:52.830151   74485 cri.go:89] found id: ""
	I1105 19:14:52.830178   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.830188   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:52.830195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:52.830258   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:52.864803   74485 cri.go:89] found id: ""
	I1105 19:14:52.864832   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.864853   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:52.864868   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:52.864930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:52.897237   74485 cri.go:89] found id: ""
	I1105 19:14:52.897271   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.897282   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:52.897289   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:52.897351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:52.932236   74485 cri.go:89] found id: ""
	I1105 19:14:52.932262   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.932270   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:52.932275   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:52.932319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:52.965781   74485 cri.go:89] found id: ""
	I1105 19:14:52.965808   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.965817   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:52.965825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:52.965918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:52.999098   74485 cri.go:89] found id: ""
	I1105 19:14:52.999121   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.999129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:52.999137   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:52.999146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:53.051085   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:53.051127   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:53.064690   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:53.064717   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:53.128334   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:53.128358   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:53.128372   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:53.207751   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:53.207791   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:55.745430   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:55.758734   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:55.758821   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:55.791827   74485 cri.go:89] found id: ""
	I1105 19:14:55.791854   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.791862   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:55.791868   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:55.791922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:55.824191   74485 cri.go:89] found id: ""
	I1105 19:14:55.824217   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.824224   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:55.824230   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:55.824278   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:55.858579   74485 cri.go:89] found id: ""
	I1105 19:14:55.858611   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.858619   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:55.858625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:55.858673   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:55.891579   74485 cri.go:89] found id: ""
	I1105 19:14:55.891604   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.891612   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:55.891617   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:55.891663   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:55.924881   74485 cri.go:89] found id: ""
	I1105 19:14:55.924910   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.924920   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:55.924930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:55.924999   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:55.956634   74485 cri.go:89] found id: ""
	I1105 19:14:55.956663   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.956678   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:55.956686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:55.956742   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:55.988770   74485 cri.go:89] found id: ""
	I1105 19:14:55.988803   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.988814   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:55.988821   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:55.988880   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:56.022236   74485 cri.go:89] found id: ""
	I1105 19:14:56.022257   74485 logs.go:282] 0 containers: []
	W1105 19:14:56.022266   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:56.022273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:56.022284   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:56.073035   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:56.073071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:56.086899   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:56.086923   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:56.158219   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:56.158247   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:56.158259   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:56.246621   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:56.246660   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:53.962537   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.461516   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:54.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.725381   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:59.223995   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:55.847591   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.346027   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:00.349718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.791443   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:58.804398   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:58.804476   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:58.837812   74485 cri.go:89] found id: ""
	I1105 19:14:58.837840   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.837856   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:58.837863   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:58.837926   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:58.870154   74485 cri.go:89] found id: ""
	I1105 19:14:58.870186   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.870197   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:58.870204   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:58.870268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:58.906518   74485 cri.go:89] found id: ""
	I1105 19:14:58.906545   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.906553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:58.906563   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:58.906614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:58.939320   74485 cri.go:89] found id: ""
	I1105 19:14:58.939346   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.939357   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:58.939364   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:58.939426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:58.974116   74485 cri.go:89] found id: ""
	I1105 19:14:58.974143   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.974153   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:58.974160   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:58.974221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:59.006820   74485 cri.go:89] found id: ""
	I1105 19:14:59.006854   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.006866   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:59.006873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:59.006933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:59.039691   74485 cri.go:89] found id: ""
	I1105 19:14:59.039723   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.039735   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:59.039742   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:59.039800   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:59.071829   74485 cri.go:89] found id: ""
	I1105 19:14:59.071860   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.071881   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:59.071893   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:59.071906   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:59.124158   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:59.124195   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:59.138563   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:59.138594   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:59.216148   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:59.216174   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:59.216189   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:59.295262   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:59.295297   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:01.833789   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:01.847332   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:01.847408   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:01.882721   74485 cri.go:89] found id: ""
	I1105 19:15:01.882743   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.882750   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:01.882755   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:01.882811   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:01.916457   74485 cri.go:89] found id: ""
	I1105 19:15:01.916479   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.916487   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:01.916502   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:01.916557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:01.950521   74485 cri.go:89] found id: ""
	I1105 19:15:01.950552   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.950564   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:01.950571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:01.950624   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:01.985823   74485 cri.go:89] found id: ""
	I1105 19:15:01.985852   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.985862   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:01.985870   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:01.985918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:02.021689   74485 cri.go:89] found id: ""
	I1105 19:15:02.021720   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.021731   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:02.021739   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:02.021804   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:02.058632   74485 cri.go:89] found id: ""
	I1105 19:15:02.058658   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.058666   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:02.058672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:02.058738   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:02.097916   74485 cri.go:89] found id: ""
	I1105 19:15:02.097947   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.097956   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:02.097961   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:02.098010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:02.131992   74485 cri.go:89] found id: ""
	I1105 19:15:02.132027   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.132038   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:02.132050   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:02.132066   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:02.188605   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:02.188645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:02.201873   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:02.201904   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:02.274767   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:02.274795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:02.274811   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:02.358520   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:02.358559   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:58.962072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.461009   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.224719   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:03.724333   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:02.847593   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.348665   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:04.897693   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:04.913131   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:04.913207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:04.952546   74485 cri.go:89] found id: ""
	I1105 19:15:04.952571   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.952579   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:04.952584   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:04.952643   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:04.987334   74485 cri.go:89] found id: ""
	I1105 19:15:04.987360   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.987368   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:04.987374   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:04.987434   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:05.021873   74485 cri.go:89] found id: ""
	I1105 19:15:05.021906   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.021919   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:05.021926   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:05.021985   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:05.056169   74485 cri.go:89] found id: ""
	I1105 19:15:05.056199   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.056208   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:05.056213   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:05.056265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:05.093090   74485 cri.go:89] found id: ""
	I1105 19:15:05.093117   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.093125   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:05.093130   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:05.093182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:05.127533   74485 cri.go:89] found id: ""
	I1105 19:15:05.127557   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.127564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:05.127576   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:05.127625   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:05.165127   74485 cri.go:89] found id: ""
	I1105 19:15:05.165162   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.165173   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:05.165180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:05.165243   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:05.200526   74485 cri.go:89] found id: ""
	I1105 19:15:05.200556   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.200567   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:05.200578   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:05.200593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:05.247497   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:05.247535   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:05.261963   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:05.261996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:05.336813   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:05.336833   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:05.336844   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:05.412278   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:05.412320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:03.461266   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.463142   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.728530   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:08.227700   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.848748   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:10.346754   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.951085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:07.966125   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:07.966203   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:08.004253   74485 cri.go:89] found id: ""
	I1105 19:15:08.004291   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.004302   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:08.004310   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:08.004373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:08.039539   74485 cri.go:89] found id: ""
	I1105 19:15:08.039562   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.039569   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:08.039575   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:08.039629   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:08.076043   74485 cri.go:89] found id: ""
	I1105 19:15:08.076080   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.076093   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:08.076101   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:08.076157   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:08.110489   74485 cri.go:89] found id: ""
	I1105 19:15:08.110512   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.110519   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:08.110525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:08.110589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:08.147532   74485 cri.go:89] found id: ""
	I1105 19:15:08.147564   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.147574   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:08.147580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:08.147628   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:08.182225   74485 cri.go:89] found id: ""
	I1105 19:15:08.182248   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.182256   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:08.182263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:08.182322   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:08.223488   74485 cri.go:89] found id: ""
	I1105 19:15:08.223524   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.223536   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:08.223544   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:08.223610   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:08.266524   74485 cri.go:89] found id: ""
	I1105 19:15:08.266559   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.266571   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:08.266582   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:08.266597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:08.279036   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:08.279061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:08.346030   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:08.346052   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:08.346064   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:08.428081   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:08.428118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:08.464760   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:08.464789   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.016193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:11.030598   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:11.030681   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:11.066035   74485 cri.go:89] found id: ""
	I1105 19:15:11.066064   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.066073   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:11.066078   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:11.066133   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:11.103906   74485 cri.go:89] found id: ""
	I1105 19:15:11.103937   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.103948   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:11.103955   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:11.104023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:11.142936   74485 cri.go:89] found id: ""
	I1105 19:15:11.143024   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.143034   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:11.143041   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:11.143091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:11.180041   74485 cri.go:89] found id: ""
	I1105 19:15:11.180074   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.180086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:11.180094   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:11.180158   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:11.215661   74485 cri.go:89] found id: ""
	I1105 19:15:11.215693   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.215701   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:11.215707   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:11.215758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:11.252603   74485 cri.go:89] found id: ""
	I1105 19:15:11.252651   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.252663   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:11.252672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:11.252739   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:11.299295   74485 cri.go:89] found id: ""
	I1105 19:15:11.299328   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.299340   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:11.299347   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:11.299402   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:11.355153   74485 cri.go:89] found id: ""
	I1105 19:15:11.355177   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.355185   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:11.355193   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:11.355206   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:11.441076   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:11.441118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:11.480367   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:11.480396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.534646   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:11.534683   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:11.548141   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:11.548170   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:11.616452   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:07.961073   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:09.962118   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.455874   73732 pod_ready.go:82] duration metric: took 4m0.000853559s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:12.455911   73732 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:15:12.455936   73732 pod_ready.go:39] duration metric: took 4m14.55377544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:12.455984   73732 kubeadm.go:597] duration metric: took 4m23.030552871s to restartPrimaryControlPlane
	W1105 19:15:12.456078   73732 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:12.456111   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:10.724247   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.725886   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.846646   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.848074   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.117448   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:14.131224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:14.131297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:14.167811   74485 cri.go:89] found id: ""
	I1105 19:15:14.167843   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.167855   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:14.167862   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:14.167921   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:14.204128   74485 cri.go:89] found id: ""
	I1105 19:15:14.204156   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.204164   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:14.204169   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:14.204232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:14.240687   74485 cri.go:89] found id: ""
	I1105 19:15:14.240716   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.240727   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:14.240735   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:14.240788   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:14.274204   74485 cri.go:89] found id: ""
	I1105 19:15:14.274231   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.274242   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:14.274249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:14.274307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:14.312090   74485 cri.go:89] found id: ""
	I1105 19:15:14.312119   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.312130   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:14.312139   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:14.312200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:14.346824   74485 cri.go:89] found id: ""
	I1105 19:15:14.346857   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.346868   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:14.346875   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:14.346934   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:14.380634   74485 cri.go:89] found id: ""
	I1105 19:15:14.380668   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.380679   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:14.380686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:14.380746   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:14.414402   74485 cri.go:89] found id: ""
	I1105 19:15:14.414432   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.414441   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:14.414449   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:14.414459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:14.464542   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:14.464581   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:14.478195   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:14.478225   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:14.553670   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:14.553693   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:14.553708   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:14.634619   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:14.634659   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.174085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:17.191712   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:17.191771   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:17.234101   74485 cri.go:89] found id: ""
	I1105 19:15:17.234132   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.234143   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:17.234149   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:17.234213   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:17.281548   74485 cri.go:89] found id: ""
	I1105 19:15:17.281574   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.281581   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:17.281588   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:17.281655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:17.337698   74485 cri.go:89] found id: ""
	I1105 19:15:17.337727   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.337735   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:17.337743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:17.337790   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:17.371756   74485 cri.go:89] found id: ""
	I1105 19:15:17.371782   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.371790   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:17.371796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:17.371854   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:17.404989   74485 cri.go:89] found id: ""
	I1105 19:15:17.405015   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.405026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:17.405033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:17.405096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:17.438613   74485 cri.go:89] found id: ""
	I1105 19:15:17.438637   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.438648   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:17.438656   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:17.438717   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:17.470465   74485 cri.go:89] found id: ""
	I1105 19:15:17.470494   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.470502   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:17.470508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:17.470558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:17.503835   74485 cri.go:89] found id: ""
	I1105 19:15:17.503867   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.503876   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:17.503884   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:17.503896   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:17.584110   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:17.584146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.626928   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:17.626955   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:15.223749   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.225434   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.347847   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:19.847047   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.679356   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:17.679397   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:17.693476   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:17.693506   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:17.766809   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.266926   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:20.282219   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:20.282293   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:20.322133   74485 cri.go:89] found id: ""
	I1105 19:15:20.322163   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.322171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:20.322178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:20.322248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:20.357030   74485 cri.go:89] found id: ""
	I1105 19:15:20.357072   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.357084   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:20.357091   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:20.357156   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:20.390523   74485 cri.go:89] found id: ""
	I1105 19:15:20.390549   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.390559   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:20.390567   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:20.390631   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:20.425807   74485 cri.go:89] found id: ""
	I1105 19:15:20.425830   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.425837   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:20.425843   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:20.425903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:20.461984   74485 cri.go:89] found id: ""
	I1105 19:15:20.462014   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.462026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:20.462033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:20.462094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:20.495689   74485 cri.go:89] found id: ""
	I1105 19:15:20.495725   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.495739   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:20.495746   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:20.495799   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:20.528666   74485 cri.go:89] found id: ""
	I1105 19:15:20.528701   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.528713   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:20.528721   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:20.528783   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:20.562566   74485 cri.go:89] found id: ""
	I1105 19:15:20.562596   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.562606   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:20.562614   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:20.562624   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:20.610961   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:20.611000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:20.623898   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:20.623928   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:20.696412   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.696440   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:20.696456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:20.779601   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:20.779642   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:19.725198   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.224019   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.225286   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.347992   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.846718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:23.319846   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:23.333278   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:23.333357   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:23.370771   74485 cri.go:89] found id: ""
	I1105 19:15:23.370796   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.370805   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:23.370810   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:23.370872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:23.405994   74485 cri.go:89] found id: ""
	I1105 19:15:23.406021   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.406029   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:23.406034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:23.406092   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:23.443729   74485 cri.go:89] found id: ""
	I1105 19:15:23.443757   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.443767   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:23.443774   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:23.443836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:23.476162   74485 cri.go:89] found id: ""
	I1105 19:15:23.476188   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.476197   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:23.476205   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:23.476266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:23.509325   74485 cri.go:89] found id: ""
	I1105 19:15:23.509353   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.509363   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:23.509371   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:23.509427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:23.541880   74485 cri.go:89] found id: ""
	I1105 19:15:23.541912   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.541922   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:23.541929   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:23.541993   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:23.574204   74485 cri.go:89] found id: ""
	I1105 19:15:23.574236   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.574248   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:23.574256   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:23.574323   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:23.606865   74485 cri.go:89] found id: ""
	I1105 19:15:23.606896   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.606908   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:23.606918   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:23.606932   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:23.673771   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:23.673792   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:23.673803   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:23.753298   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:23.753335   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:23.792273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:23.792304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:23.843072   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:23.843110   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.356859   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:26.369417   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:26.369488   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:26.403611   74485 cri.go:89] found id: ""
	I1105 19:15:26.403639   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.403647   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:26.403653   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:26.403725   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:26.439891   74485 cri.go:89] found id: ""
	I1105 19:15:26.439924   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.439936   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:26.439943   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:26.439991   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:26.473502   74485 cri.go:89] found id: ""
	I1105 19:15:26.473542   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.473554   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:26.473561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:26.473640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:26.505666   74485 cri.go:89] found id: ""
	I1105 19:15:26.505695   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.505703   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:26.505710   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:26.505769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:26.539781   74485 cri.go:89] found id: ""
	I1105 19:15:26.539815   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.539827   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:26.539835   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:26.539911   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:26.574673   74485 cri.go:89] found id: ""
	I1105 19:15:26.574712   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.574721   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:26.574727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:26.574773   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:26.608410   74485 cri.go:89] found id: ""
	I1105 19:15:26.608433   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.608441   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:26.608446   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:26.608494   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:26.644036   74485 cri.go:89] found id: ""
	I1105 19:15:26.644065   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.644076   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:26.644087   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:26.644098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.718901   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:26.718937   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:26.758920   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:26.758953   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:26.811241   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:26.811277   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.824931   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:26.824961   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:26.891799   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:26.725062   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:27.724594   74141 pod_ready.go:82] duration metric: took 4m0.006622979s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:27.724627   74141 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1105 19:15:27.724644   74141 pod_ready.go:39] duration metric: took 4m0.807889519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:27.724663   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:15:27.724711   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:27.724769   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:27.771870   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:27.771897   74141 cri.go:89] found id: ""
	I1105 19:15:27.771906   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:27.771966   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.776484   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:27.776553   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:27.823529   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:27.823564   74141 cri.go:89] found id: ""
	I1105 19:15:27.823576   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:27.823638   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.828600   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:27.828685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:27.878206   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:27.878242   74141 cri.go:89] found id: ""
	I1105 19:15:27.878254   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:27.878317   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.882545   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:27.882640   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:27.920102   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:27.920127   74141 cri.go:89] found id: ""
	I1105 19:15:27.920137   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:27.920189   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.924516   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:27.924593   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:27.969493   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:27.969523   74141 cri.go:89] found id: ""
	I1105 19:15:27.969534   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:27.969589   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.973637   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:27.973724   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:28.014369   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.014396   74141 cri.go:89] found id: ""
	I1105 19:15:28.014405   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:28.014463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.019040   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:28.019112   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:28.056411   74141 cri.go:89] found id: ""
	I1105 19:15:28.056438   74141 logs.go:282] 0 containers: []
	W1105 19:15:28.056446   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:28.056452   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:28.056502   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:28.099541   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.099562   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.099566   74141 cri.go:89] found id: ""
	I1105 19:15:28.099573   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:28.099628   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.104144   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.108443   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:28.108465   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.153262   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:28.153302   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.197210   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:28.197237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:28.242915   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:28.242944   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:28.257468   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:28.257497   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:28.299795   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:28.299825   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:28.333983   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:28.334015   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:28.369174   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:28.369202   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:28.405838   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:28.405869   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:28.477842   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:28.477880   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:28.595832   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:28.595865   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:28.639146   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:28.639179   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.689519   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:28.689554   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.846977   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:28.847878   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:29.392417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:29.405249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:29.405331   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:29.437397   74485 cri.go:89] found id: ""
	I1105 19:15:29.437432   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.437443   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:29.437450   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:29.437504   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:29.469908   74485 cri.go:89] found id: ""
	I1105 19:15:29.469938   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.469946   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:29.469951   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:29.470008   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:29.502302   74485 cri.go:89] found id: ""
	I1105 19:15:29.502331   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.502339   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:29.502345   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:29.502391   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:29.534285   74485 cri.go:89] found id: ""
	I1105 19:15:29.534309   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.534317   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:29.534322   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:29.534373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:29.571918   74485 cri.go:89] found id: ""
	I1105 19:15:29.571962   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.571973   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:29.571983   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:29.572042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:29.605324   74485 cri.go:89] found id: ""
	I1105 19:15:29.605354   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.605365   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:29.605373   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:29.605435   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:29.640181   74485 cri.go:89] found id: ""
	I1105 19:15:29.640210   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.640218   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:29.640224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:29.640273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:29.671121   74485 cri.go:89] found id: ""
	I1105 19:15:29.671147   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.671155   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:29.671164   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:29.671174   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:29.750821   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:29.750856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:29.787452   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:29.787479   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:29.840413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:29.840459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:29.855540   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:29.855580   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:29.925849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
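
Editor's note on the block above: the repeated `found id: ""` / `0 containers` lines are minikube asking the CRI runtime for container IDs filtered by name and treating empty output as "no such container"; with the control plane down, `kubectl describe nodes` then fails with the connection-refused error quoted above. A minimal, hypothetical sketch of that listing step in Go (assumes `sudo` and `crictl` are available on the node; the helper name is illustrative, not minikube's actual API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns
    // the container IDs it prints, one per line. An empty result is what the
    // log above reports as "0 containers".
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainerIDs(name)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("%d containers matching %q: %v\n", len(ids), name, ids)
    	}
    }
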
	I1105 19:15:32.426016   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:32.438759   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:32.438824   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:32.476376   74485 cri.go:89] found id: ""
	I1105 19:15:32.476406   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.476416   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:32.476423   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:32.476490   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:32.512328   74485 cri.go:89] found id: ""
	I1105 19:15:32.512352   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.512360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:32.512365   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:32.512414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:32.546803   74485 cri.go:89] found id: ""
	I1105 19:15:32.546833   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.546844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:32.546851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:32.546925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:32.585904   74485 cri.go:89] found id: ""
	I1105 19:15:32.585934   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.585946   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:32.585953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:32.586014   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:32.620976   74485 cri.go:89] found id: ""
	I1105 19:15:32.621005   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.621012   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:32.621018   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:32.621082   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.668028   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:31.684024   74141 api_server.go:72] duration metric: took 4m12.496021782s to wait for apiserver process to appear ...
	I1105 19:15:31.684060   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:15:31.684105   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:31.684163   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:31.719462   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:31.719496   74141 cri.go:89] found id: ""
	I1105 19:15:31.719506   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:31.719559   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.723632   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:31.723702   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:31.761976   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:31.762001   74141 cri.go:89] found id: ""
	I1105 19:15:31.762010   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:31.762067   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.766066   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:31.766137   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:31.799673   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:31.799694   74141 cri.go:89] found id: ""
	I1105 19:15:31.799701   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:31.799753   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.803632   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:31.803714   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:31.841782   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:31.841808   74141 cri.go:89] found id: ""
	I1105 19:15:31.841818   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:31.841873   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.850409   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:31.850471   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:31.891932   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:31.891959   74141 cri.go:89] found id: ""
	I1105 19:15:31.891969   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:31.892026   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.896065   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:31.896125   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.932759   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:31.932781   74141 cri.go:89] found id: ""
	I1105 19:15:31.932788   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:31.932831   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.936611   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:31.936685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:31.971296   74141 cri.go:89] found id: ""
	I1105 19:15:31.971328   74141 logs.go:282] 0 containers: []
	W1105 19:15:31.971339   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:31.971348   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:31.971410   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:32.006153   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:32.006173   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.006177   74141 cri.go:89] found id: ""
	I1105 19:15:32.006184   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:32.006226   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.010159   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.013807   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.013831   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.084222   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:32.084273   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:32.127875   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:32.127928   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:32.173008   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:32.173041   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:32.235366   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.235402   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.714822   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:32.714861   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.750733   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.750761   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.796233   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.796264   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.809269   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.809296   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:32.931162   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:32.931196   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:32.968551   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:32.968578   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:33.008115   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:33.008152   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:33.046201   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:33.046237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
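
Editor's note: once a component's container ID is known (the 74141 run above found one each for kube-apiserver, etcd, coredns, scheduler, proxy, controller-manager and two for storage-provisioner), minikube tails its logs with `crictl logs --tail 400 <id>`. A hedged sketch of that call, assuming a hypothetical container ID rather than one taken from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs fetches the last n log lines for a CRI container ID,
    // matching the shape of the `crictl logs --tail 400 <id>` calls above.
    func tailContainerLogs(id string, n int) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Hypothetical (truncated) container ID; in the log these come from
    	// `crictl ps --quiet --name=<component>`.
    	logs, err := tailContainerLogs("a8de930573a6", 400)
    	if err != nil {
    		fmt.Println("failed to fetch logs:", err)
    		return
    	}
    	fmt.Print(logs)
    }
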
	I1105 19:15:31.346652   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:33.347118   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:32.658958   74485 cri.go:89] found id: ""
	I1105 19:15:32.659006   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.659018   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:32.659026   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:32.659091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:32.694317   74485 cri.go:89] found id: ""
	I1105 19:15:32.694341   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.694349   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:32.694354   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:32.694403   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:32.728277   74485 cri.go:89] found id: ""
	I1105 19:15:32.728314   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.728327   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:32.728338   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.728352   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.815579   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.815615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.856776   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.856807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.909477   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.909518   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.923789   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.923817   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:32.997898   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:35.498040   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:35.511537   74485 kubeadm.go:597] duration metric: took 4m4.46832509s to restartPrimaryControlPlane
	W1105 19:15:35.511612   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:35.511644   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:35.586678   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:15:35.591512   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:15:35.592489   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:15:35.592507   74141 api_server.go:131] duration metric: took 3.908440367s to wait for apiserver health ...
	I1105 19:15:35.592514   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:15:35.592538   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:35.592589   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:35.636389   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.636408   74141 cri.go:89] found id: ""
	I1105 19:15:35.636416   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:35.636463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.640778   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:35.640839   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:35.676793   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:35.676818   74141 cri.go:89] found id: ""
	I1105 19:15:35.676828   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:35.676890   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.681596   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:35.681669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:35.721728   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:35.721754   74141 cri.go:89] found id: ""
	I1105 19:15:35.721763   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:35.721808   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.725619   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:35.725677   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:35.765348   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:35.765377   74141 cri.go:89] found id: ""
	I1105 19:15:35.765386   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:35.765439   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.769594   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:35.769669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:35.809427   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:35.809452   74141 cri.go:89] found id: ""
	I1105 19:15:35.809460   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:35.809505   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.814317   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:35.814376   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:35.853861   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:35.853882   74141 cri.go:89] found id: ""
	I1105 19:15:35.853890   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:35.853934   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.857734   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:35.857787   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:35.897791   74141 cri.go:89] found id: ""
	I1105 19:15:35.897816   74141 logs.go:282] 0 containers: []
	W1105 19:15:35.897824   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:35.897830   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:35.897887   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:35.940906   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:35.940940   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:35.940946   74141 cri.go:89] found id: ""
	I1105 19:15:35.940954   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:35.941006   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.945200   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.948860   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:35.948884   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.992660   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:35.992690   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:36.033586   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:36.033617   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:36.066599   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:36.066643   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:36.104895   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:36.104932   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:36.489747   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:36.489781   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:36.531923   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:36.531952   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:36.598718   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:36.598758   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:36.612969   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:36.612998   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:36.718535   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:36.718568   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:36.755636   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:36.755677   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:36.815561   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:36.815640   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:36.850878   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:36.850904   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:39.390699   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:15:39.390733   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.390738   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.390743   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.390747   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.390750   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.390753   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.390760   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.390764   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.390771   74141 system_pods.go:74] duration metric: took 3.798251189s to wait for pod list to return data ...
	I1105 19:15:39.390777   74141 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:15:39.393894   74141 default_sa.go:45] found service account: "default"
	I1105 19:15:39.393914   74141 default_sa.go:55] duration metric: took 3.132788ms for default service account to be created ...
	I1105 19:15:39.393929   74141 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:15:39.398455   74141 system_pods.go:86] 8 kube-system pods found
	I1105 19:15:39.398480   74141 system_pods.go:89] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.398485   74141 system_pods.go:89] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.398490   74141 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.398494   74141 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.398497   74141 system_pods.go:89] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.398501   74141 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.398508   74141 system_pods.go:89] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.398512   74141 system_pods.go:89] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.398520   74141 system_pods.go:126] duration metric: took 4.586494ms to wait for k8s-apps to be running ...
	I1105 19:15:39.398529   74141 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:15:39.398569   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.413878   74141 system_svc.go:56] duration metric: took 15.340417ms WaitForService to wait for kubelet
	I1105 19:15:39.413908   74141 kubeadm.go:582] duration metric: took 4m20.225910976s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:15:39.413936   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:15:39.416851   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:15:39.416870   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:15:39.416880   74141 node_conditions.go:105] duration metric: took 2.939584ms to run NodePressure ...
	I1105 19:15:39.416891   74141 start.go:241] waiting for startup goroutines ...
	I1105 19:15:39.416899   74141 start.go:246] waiting for cluster config update ...
	I1105 19:15:39.416911   74141 start.go:255] writing updated cluster config ...
	I1105 19:15:39.417211   74141 ssh_runner.go:195] Run: rm -f paused
	I1105 19:15:39.463773   74141 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:15:39.465688   74141 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-608095" cluster and "default" namespace by default
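
Editor's note: the 74141 run declared the default-k8s-diff-port cluster healthy once `https://192.168.50.10:8444/healthz` returned 200 with body `ok`. A minimal sketch of such a probe, assuming a self-signed apiserver certificate (hence the skipped TLS verification) and using the address from this log purely as an example; this is not minikube's actual api_server.go code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// The apiserver serves a self-signed certificate for the node IP, so
    	// skip verification for this illustrative health probe only.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.50.10:8444/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with the body "ok", as in the log above.
    	fmt.Printf("%d: %s\n", resp.StatusCode, body)
    }
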
	I1105 19:15:39.702249   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.19058336s)
	I1105 19:15:39.702314   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.717966   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:39.728114   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:39.740451   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:39.740476   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:39.740519   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:39.751089   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:39.751150   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:39.761832   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:39.771841   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:39.771904   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:39.782332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.792379   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:39.792438   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.801625   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:39.811691   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:39.811740   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
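
Editor's note: the grep-and-remove sequence above is minikube's stale-kubeconfig cleanup. For each file under /etc/kubernetes it checks whether the expected control-plane endpoint is referenced and removes the file if not; here every grep fails with "No such file or directory" only because `kubeadm reset` had already deleted the files. A rough sketch of the same idea, assuming local file access instead of an SSH runner; the paths and endpoint string come from the log, the helper logic is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			// Missing file: nothing to clean up (the case seen in the log).
    			fmt.Printf("%s: %v\n", f, err)
    			continue
    		}
    		if !strings.Contains(string(data), endpoint) {
    			// Config points at a different endpoint: treat as stale and remove.
    			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
    			_ = os.Remove(f)
    		}
    	}
    }
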
	I1105 19:15:39.821162   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:39.891377   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:15:39.891443   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:40.034176   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:40.034337   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:40.034476   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:15:40.211588   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:35.847491   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:38.346965   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.348252   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.213724   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:40.213838   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:40.213939   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:40.214048   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:40.214172   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:40.214266   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:40.214375   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:40.214478   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:40.214567   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:40.214687   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:40.214819   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:40.214884   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:40.214980   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:40.358606   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:40.632263   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:40.766570   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:40.885914   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:40.902379   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:40.903647   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:40.903716   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:41.040274   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:41.042093   74485 out.go:235]   - Booting up control plane ...
	I1105 19:15:41.042222   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:41.048448   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:41.058445   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:41.059466   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:41.062648   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:15:38.649673   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193536212s)
	I1105 19:15:38.649753   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:38.665214   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:38.674520   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:38.684078   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:38.684102   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:38.684151   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:38.693169   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:38.693239   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:38.702305   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:38.710796   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:38.710868   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:38.719716   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.728090   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:38.728143   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.737219   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:38.745625   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:38.745692   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:38.754684   73732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:38.914343   73732 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:15:42.847011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:44.851431   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:47.368221   73732 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:15:47.368296   73732 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:47.368405   73732 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:47.368552   73732 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:47.368686   73732 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:15:47.368787   73732 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:47.370333   73732 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:47.370429   73732 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:47.370529   73732 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:47.370650   73732 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:47.370763   73732 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:47.370900   73732 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:47.371009   73732 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:47.371110   73732 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:47.371198   73732 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:47.371312   73732 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:47.371431   73732 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:47.371494   73732 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:47.371573   73732 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:47.371656   73732 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:47.371725   73732 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:15:47.371797   73732 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:47.371893   73732 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:47.371976   73732 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:47.372074   73732 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:47.372160   73732 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:47.374386   73732 out.go:235]   - Booting up control plane ...
	I1105 19:15:47.374503   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:47.374622   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:47.374707   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:47.374838   73732 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:47.374950   73732 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:47.375046   73732 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:47.375226   73732 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:15:47.375367   73732 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:15:47.375450   73732 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.124171ms
	I1105 19:15:47.375549   73732 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:15:47.375647   73732 kubeadm.go:310] [api-check] The API server is healthy after 5.001431223s
	I1105 19:15:47.375804   73732 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:15:47.375968   73732 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:15:47.376055   73732 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:15:47.376321   73732 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-271881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:15:47.376412   73732 kubeadm.go:310] [bootstrap-token] Using token: 2xak8n.owgv6oncwawjarav
	I1105 19:15:47.377766   73732 out.go:235]   - Configuring RBAC rules ...
	I1105 19:15:47.377911   73732 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:15:47.378024   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:15:47.378138   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:15:47.378243   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:15:47.378337   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:15:47.378408   73732 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:15:47.378502   73732 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:15:47.378541   73732 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:15:47.378580   73732 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:15:47.378587   73732 kubeadm.go:310] 
	I1105 19:15:47.378635   73732 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:15:47.378645   73732 kubeadm.go:310] 
	I1105 19:15:47.378711   73732 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:15:47.378718   73732 kubeadm.go:310] 
	I1105 19:15:47.378760   73732 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:15:47.378813   73732 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:15:47.378856   73732 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:15:47.378860   73732 kubeadm.go:310] 
	I1105 19:15:47.378910   73732 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:15:47.378913   73732 kubeadm.go:310] 
	I1105 19:15:47.378955   73732 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:15:47.378959   73732 kubeadm.go:310] 
	I1105 19:15:47.379030   73732 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:15:47.379114   73732 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:15:47.379195   73732 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:15:47.379203   73732 kubeadm.go:310] 
	I1105 19:15:47.379320   73732 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:15:47.379427   73732 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:15:47.379442   73732 kubeadm.go:310] 
	I1105 19:15:47.379559   73732 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.379718   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:15:47.379762   73732 kubeadm.go:310] 	--control-plane 
	I1105 19:15:47.379770   73732 kubeadm.go:310] 
	I1105 19:15:47.379844   73732 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:15:47.379851   73732 kubeadm.go:310] 
	I1105 19:15:47.379977   73732 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.380150   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:15:47.380167   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:15:47.380174   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:15:47.381714   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:15:47.382944   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:15:47.394080   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:15:47.411715   73732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:15:47.411773   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.411821   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-271881 minikube.k8s.io/updated_at=2024_11_05T19_15_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=embed-certs-271881 minikube.k8s.io/primary=true
	I1105 19:15:47.439084   73732 ops.go:34] apiserver oom_adj: -16
	I1105 19:15:47.601691   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.348094   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:49.847296   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:48.102103   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:48.602767   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.101780   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.601826   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.101976   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.602763   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.102779   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.601930   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.102574   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.241636   73732 kubeadm.go:1113] duration metric: took 4.829922813s to wait for elevateKubeSystemPrivileges
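
Editor's note: the repeated `kubectl get sa default` calls above are the post-init wait for the `default` service account to appear before privileges are elevated and StartCluster continues; minikube retries on an interval until the command succeeds. A simplified polling loop of the same shape, assuming the kubectl binary and kubeconfig paths from the log; the interval and timeout here are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Succeeds only once kube-controller-manager has created the
    		// "default" service account in the "default" namespace.
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account is present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
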
	I1105 19:15:52.241680   73732 kubeadm.go:394] duration metric: took 5m2.866246993s to StartCluster
	I1105 19:15:52.241704   73732 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.241801   73732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:15:52.244409   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.244716   73732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:15:52.244789   73732 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:15:52.244893   73732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-271881"
	I1105 19:15:52.244914   73732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-271881"
	I1105 19:15:52.244911   73732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-271881"
	I1105 19:15:52.244933   73732 addons.go:69] Setting metrics-server=true in profile "embed-certs-271881"
	I1105 19:15:52.244941   73732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-271881"
	I1105 19:15:52.244954   73732 addons.go:234] Setting addon metrics-server=true in "embed-certs-271881"
	W1105 19:15:52.244965   73732 addons.go:243] addon metrics-server should already be in state true
	I1105 19:15:52.244998   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1105 19:15:52.244925   73732 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:15:52.245001   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245065   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245404   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245422   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245436   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245455   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245464   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245543   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.246341   73732 out.go:177] * Verifying Kubernetes components...
	I1105 19:15:52.247801   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:15:52.261802   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I1105 19:15:52.262325   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.262955   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.263159   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.263591   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.264367   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.264413   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.265696   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I1105 19:15:52.265941   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I1105 19:15:52.266161   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266322   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266776   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266782   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266800   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.266803   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.267185   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267224   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267353   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.267804   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.267846   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.271094   73732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-271881"
	W1105 19:15:52.271117   73732 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:15:52.271147   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.271509   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.271554   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.284180   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I1105 19:15:52.284456   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1105 19:15:52.284703   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.284925   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.285248   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285261   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285355   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285363   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285578   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285727   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285766   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.285862   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.287834   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.288259   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.290341   73732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:15:52.290346   73732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:15:52.290695   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I1105 19:15:52.291040   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.291464   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.291479   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.291776   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.291974   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:15:52.291994   73732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:15:52.292015   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292054   73732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.292067   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:15:52.292079   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292355   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.292400   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.295296   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295650   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.295675   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295701   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295797   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.295969   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296102   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296247   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.296272   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.296305   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.296582   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.296714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296848   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296947   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.314049   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I1105 19:15:52.314561   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.315148   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.315168   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.315884   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.316080   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.318146   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.318465   73732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.318478   73732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:15:52.318496   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.321312   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321825   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.321850   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321885   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.322095   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.322238   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.322397   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.453762   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:15:52.483722   73732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493492   73732 node_ready.go:49] node "embed-certs-271881" has status "Ready":"True"
	I1105 19:15:52.493519   73732 node_ready.go:38] duration metric: took 9.757528ms for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493530   73732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:52.508208   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:15:52.577925   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.589366   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:15:52.589389   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:15:52.612570   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:15:52.612593   73732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:15:52.645851   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.647686   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:52.647713   73732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:15:52.668865   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:53.246894   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246918   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.246923   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246950   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247230   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247277   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247305   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247323   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247338   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247349   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247331   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247368   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247378   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247710   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247739   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247746   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247779   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247800   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247811   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.269143   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.269165   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.269465   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.269479   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.269483   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.494717   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.494741   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495080   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495100   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495114   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.495123   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495348   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.495394   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495414   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495427   73732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-271881"
	I1105 19:15:53.497126   73732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:15:52.347616   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:54.352434   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:53.498891   73732 addons.go:510] duration metric: took 1.254108253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
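The addon enable logged above only applies the metrics-server manifests; the pod itself still has to become Ready (as the later "Pending / Ready:ContainersNotReady" entries show). A minimal way to check this from a workstation, assuming kubectl is pointed at the embed-certs-271881 context created by this run, would be:

  kubectl --context embed-certs-271881 -n kube-system get deploy metrics-server
  kubectl --context embed-certs-271881 get apiservice v1beta1.metrics.k8s.io
  kubectl --context embed-certs-271881 top nodes    # only succeeds once metrics-server is serving metrics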
	I1105 19:15:54.518219   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:57.015647   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:56.846198   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:58.847684   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:59.514759   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:01.514818   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:02.515124   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.515148   73732 pod_ready.go:82] duration metric: took 10.006914802s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.515158   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519864   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.519889   73732 pod_ready.go:82] duration metric: took 4.723101ms for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519900   73732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524948   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.524970   73732 pod_ready.go:82] duration metric: took 5.063029ms for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524979   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529710   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.529739   73732 pod_ready.go:82] duration metric: took 4.753888ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529750   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534282   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.534301   73732 pod_ready.go:82] duration metric: took 4.544677ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534309   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912364   73732 pod_ready.go:93] pod "kube-proxy-nfxcj" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.912387   73732 pod_ready.go:82] duration metric: took 378.071939ms for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912397   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311793   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:03.311816   73732 pod_ready.go:82] duration metric: took 399.412502ms for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311822   73732 pod_ready.go:39] duration metric: took 10.818282425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:03.311836   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:16:03.311883   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:16:03.327913   73732 api_server.go:72] duration metric: took 11.083157176s to wait for apiserver process to appear ...
	I1105 19:16:03.327947   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:16:03.327968   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:16:03.334499   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
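The healthz probe logged above can be reproduced by hand; a hedged sketch against the same endpoint (certificate verification skipped because the cluster CA is not in the local trust store):

  curl -sk https://192.168.39.58:8443/healthz
  # prints "ok" when the apiserver is healthy, matching the 200 response in the log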
	I1105 19:16:03.335530   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:16:03.335550   73732 api_server.go:131] duration metric: took 7.596072ms to wait for apiserver health ...
	I1105 19:16:03.335558   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:16:03.514782   73732 system_pods.go:59] 9 kube-system pods found
	I1105 19:16:03.514813   73732 system_pods.go:61] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.514820   73732 system_pods.go:61] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.514825   73732 system_pods.go:61] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.514830   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.514835   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.514840   73732 system_pods.go:61] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.514844   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.514854   73732 system_pods.go:61] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.514859   73732 system_pods.go:61] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.514868   73732 system_pods.go:74] duration metric: took 179.304519ms to wait for pod list to return data ...
	I1105 19:16:03.514877   73732 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:16:03.712690   73732 default_sa.go:45] found service account: "default"
	I1105 19:16:03.712719   73732 default_sa.go:55] duration metric: took 197.831177ms for default service account to be created ...
	I1105 19:16:03.712731   73732 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:16:03.916858   73732 system_pods.go:86] 9 kube-system pods found
	I1105 19:16:03.916893   73732 system_pods.go:89] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.916902   73732 system_pods.go:89] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.916908   73732 system_pods.go:89] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.916913   73732 system_pods.go:89] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.916918   73732 system_pods.go:89] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.916921   73732 system_pods.go:89] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.916924   73732 system_pods.go:89] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.916934   73732 system_pods.go:89] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.916941   73732 system_pods.go:89] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.916953   73732 system_pods.go:126] duration metric: took 204.215711ms to wait for k8s-apps to be running ...
	I1105 19:16:03.916963   73732 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:16:03.917019   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:03.931369   73732 system_svc.go:56] duration metric: took 14.397556ms WaitForService to wait for kubelet
	I1105 19:16:03.931407   73732 kubeadm.go:582] duration metric: took 11.686653516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:16:03.931454   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:16:04.111904   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:16:04.111928   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:16:04.111937   73732 node_conditions.go:105] duration metric: took 180.475073ms to run NodePressure ...
	I1105 19:16:04.111947   73732 start.go:241] waiting for startup goroutines ...
	I1105 19:16:04.111953   73732 start.go:246] waiting for cluster config update ...
	I1105 19:16:04.111962   73732 start.go:255] writing updated cluster config ...
	I1105 19:16:04.112197   73732 ssh_runner.go:195] Run: rm -f paused
	I1105 19:16:04.158775   73732 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:16:04.160801   73732 out.go:177] * Done! kubectl is now configured to use "embed-certs-271881" cluster and "default" namespace by default
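The "Done!" line marks the end of this profile's second start; a quick way to confirm the result from the host, using only the profile name shown above, would be:

  minikube status -p embed-certs-271881
  # expect host/kubelet/apiserver Running and kubeconfig Configured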
	I1105 19:16:01.346039   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:03.346369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:05.846866   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:08.346383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:10.346570   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:12.347171   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:14.846335   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.840591   73496 pod_ready.go:82] duration metric: took 4m0.000143963s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	E1105 19:16:17.840620   73496 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:16:17.840649   73496 pod_ready.go:39] duration metric: took 4m11.022533189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:17.840682   73496 kubeadm.go:597] duration metric: took 4m18.432062793s to restartPrimaryControlPlane
	W1105 19:16:17.840732   73496 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:16:17.840755   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:16:21.064069   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:16:21.064607   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:21.064798   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:26.065202   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:26.065410   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:36.065932   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:36.066151   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:43.960239   73496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.119460606s)
	I1105 19:16:43.960324   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:43.986199   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:16:43.999287   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:16:44.013653   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:16:44.013675   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:16:44.013718   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:16:44.026073   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:16:44.026140   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:16:44.038723   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:16:44.050880   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:16:44.050957   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:16:44.061696   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.071739   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:16:44.072301   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.084030   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:16:44.093217   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:16:44.093275   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:16:44.102494   73496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:16:44.267623   73496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:16:52.534375   73496 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:16:52.534458   73496 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:16:52.534569   73496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:16:52.534704   73496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:16:52.534834   73496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:16:52.534930   73496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:16:52.536666   73496 out.go:235]   - Generating certificates and keys ...
	I1105 19:16:52.536759   73496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:16:52.536836   73496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:16:52.536911   73496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:16:52.536963   73496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:16:52.537060   73496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:16:52.537145   73496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:16:52.537232   73496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:16:52.537286   73496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:16:52.537361   73496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:16:52.537455   73496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:16:52.537500   73496 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:16:52.537578   73496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:16:52.537648   73496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:16:52.537725   73496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:16:52.537797   73496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:16:52.537905   73496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:16:52.537988   73496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:16:52.538075   73496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:16:52.538136   73496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:16:52.539588   73496 out.go:235]   - Booting up control plane ...
	I1105 19:16:52.539669   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:16:52.539743   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:16:52.539800   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:16:52.539885   73496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:16:52.539987   73496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:16:52.540057   73496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:16:52.540206   73496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:16:52.540300   73496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:16:52.540367   73496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733469ms
	I1105 19:16:52.540447   73496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:16:52.540528   73496 kubeadm.go:310] [api-check] The API server is healthy after 5.001962829s
	I1105 19:16:52.540651   73496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:16:52.540806   73496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:16:52.540899   73496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:16:52.541094   73496 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-459223 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:16:52.541164   73496 kubeadm.go:310] [bootstrap-token] Using token: f0bzzt.jihwqjda853aoxrb
	I1105 19:16:52.543528   73496 out.go:235]   - Configuring RBAC rules ...
	I1105 19:16:52.543658   73496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:16:52.543777   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:16:52.543942   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:16:52.544072   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:16:52.544222   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:16:52.544327   73496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:16:52.544453   73496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:16:52.544493   73496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:16:52.544536   73496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:16:52.544542   73496 kubeadm.go:310] 
	I1105 19:16:52.544593   73496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:16:52.544599   73496 kubeadm.go:310] 
	I1105 19:16:52.544687   73496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:16:52.544701   73496 kubeadm.go:310] 
	I1105 19:16:52.544739   73496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:16:52.544795   73496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:16:52.544855   73496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:16:52.544881   73496 kubeadm.go:310] 
	I1105 19:16:52.544958   73496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:16:52.544971   73496 kubeadm.go:310] 
	I1105 19:16:52.545039   73496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:16:52.545049   73496 kubeadm.go:310] 
	I1105 19:16:52.545111   73496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:16:52.545193   73496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:16:52.545251   73496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:16:52.545257   73496 kubeadm.go:310] 
	I1105 19:16:52.545324   73496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:16:52.545403   73496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:16:52.545409   73496 kubeadm.go:310] 
	I1105 19:16:52.545480   73496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.545605   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:16:52.545638   73496 kubeadm.go:310] 	--control-plane 
	I1105 19:16:52.545648   73496 kubeadm.go:310] 
	I1105 19:16:52.545779   73496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:16:52.545794   73496 kubeadm.go:310] 
	I1105 19:16:52.545903   73496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.546059   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:16:52.546074   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:16:52.546083   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:16:52.548357   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:16:52.549732   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:16:52.560406   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
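The 496-byte conflist written here is not dumped in the log. Purely as an illustration of what a bridge CNI conflist of this kind generally looks like (the field values below are assumptions, not the file contents from this run):

  sudo cat /etc/cni/net.d/1-k8s.conflist
  # {
  #   "cniVersion": "0.3.1",
  #   "name": "bridge",
  #   "plugins": [
  #     { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
  #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
  #     { "type": "portmap", "capabilities": { "portMappings": true } }
  #   ]
  # }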
	I1105 19:16:52.577268   73496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:16:52.577334   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:52.577373   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-459223 minikube.k8s.io/updated_at=2024_11_05T19_16_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=no-preload-459223 minikube.k8s.io/primary=true
	I1105 19:16:52.776299   73496 ops.go:34] apiserver oom_adj: -16
	I1105 19:16:52.776456   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.276618   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.777474   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.276726   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.777004   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.276725   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.777410   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.893941   73496 kubeadm.go:1113] duration metric: took 3.316665512s to wait for elevateKubeSystemPrivileges
	I1105 19:16:55.893984   73496 kubeadm.go:394] duration metric: took 4m56.532038314s to StartCluster
	I1105 19:16:55.894007   73496 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.894104   73496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:16:55.896620   73496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.896934   73496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:16:55.897120   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:16:55.897056   73496 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:16:55.897166   73496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-459223"
	I1105 19:16:55.897176   73496 addons.go:69] Setting default-storageclass=true in profile "no-preload-459223"
	I1105 19:16:55.897186   73496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-459223"
	I1105 19:16:55.897193   73496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-459223"
	I1105 19:16:55.897211   73496 addons.go:69] Setting metrics-server=true in profile "no-preload-459223"
	I1105 19:16:55.897231   73496 addons.go:234] Setting addon metrics-server=true in "no-preload-459223"
	W1105 19:16:55.897243   73496 addons.go:243] addon metrics-server should already be in state true
	I1105 19:16:55.897271   73496 host.go:66] Checking if "no-preload-459223" exists ...
	W1105 19:16:55.897195   73496 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:16:55.897323   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.897599   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897642   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897705   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897754   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897711   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897811   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.898341   73496 out.go:177] * Verifying Kubernetes components...
	I1105 19:16:55.899778   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:16:55.914218   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1105 19:16:55.914305   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1105 19:16:55.914726   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.914837   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.915283   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915305   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915391   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915418   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915642   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915757   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915804   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.916323   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.916367   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.916858   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1105 19:16:55.917296   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.917805   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.917832   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.918156   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.918678   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.918720   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.919527   73496 addons.go:234] Setting addon default-storageclass=true in "no-preload-459223"
	W1105 19:16:55.919549   73496 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:16:55.919576   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.919954   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.919996   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.932547   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I1105 19:16:55.933026   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.933588   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.933601   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.933918   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.934153   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.936094   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.937415   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I1105 19:16:55.937800   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.937812   73496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:16:55.938312   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.938324   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.938420   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I1105 19:16:55.938661   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.938816   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.938867   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:16:55.938894   73496 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:16:55.938918   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.939014   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.939350   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.939362   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.939855   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.940281   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.940310   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.940959   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.942661   73496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:16:55.942797   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943216   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.943392   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943422   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.943588   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.943842   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.944078   73496 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:55.944083   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.944096   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:16:55.944114   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.947574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.947767   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.947789   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.948125   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.948249   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.948343   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.948424   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.987691   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I1105 19:16:55.988131   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.988714   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.988739   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.989127   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.989325   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.991207   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.991453   73496 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:55.991472   73496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:16:55.991492   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.994362   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994800   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.994846   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994938   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.995145   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.995315   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.996088   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:56.109142   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:16:56.126382   73496 node_ready.go:35] waiting up to 6m0s for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138050   73496 node_ready.go:49] node "no-preload-459223" has status "Ready":"True"
	I1105 19:16:56.138076   73496 node_ready.go:38] duration metric: took 11.661265ms for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138087   73496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:56.143325   73496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:56.230205   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:16:56.230228   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:16:56.232603   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:56.259360   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:16:56.259388   73496 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:16:56.268694   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:56.321334   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:56.321364   73496 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:16:56.387409   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:57.010417   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010441   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010496   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010522   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010748   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.010795   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010804   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010812   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010818   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010817   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010830   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010838   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010843   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.011143   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011147   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011205   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011221   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.011209   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011298   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074127   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.074148   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.074476   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.074543   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074508   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.135875   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.135898   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136259   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136280   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136278   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136291   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.136308   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136703   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136747   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136757   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136767   73496 addons.go:475] Verifying addon metrics-server=true in "no-preload-459223"
	I1105 19:16:57.138699   73496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:16:56.066834   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:56.067140   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:57.140755   73496 addons.go:510] duration metric: took 1.243699533s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1105 19:16:58.154376   73496 pod_ready.go:103] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:17:00.149838   73496 pod_ready.go:93] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:00.149864   73496 pod_ready.go:82] duration metric: took 4.006514005s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:00.149876   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156460   73496 pod_ready.go:93] pod "kube-apiserver-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.156486   73496 pod_ready.go:82] duration metric: took 1.006602068s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156499   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160598   73496 pod_ready.go:93] pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.160618   73496 pod_ready.go:82] duration metric: took 4.110322ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160631   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164461   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.164482   73496 pod_ready.go:82] duration metric: took 3.842329ms for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164492   73496 pod_ready.go:39] duration metric: took 5.026393011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:17:01.164509   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:17:01.164566   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:17:01.183307   73496 api_server.go:72] duration metric: took 5.286331754s to wait for apiserver process to appear ...
	I1105 19:17:01.183338   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:17:01.183357   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:17:01.189083   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:17:01.190439   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:17:01.190469   73496 api_server.go:131] duration metric: took 7.123058ms to wait for apiserver health ...
	I1105 19:17:01.190479   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:17:01.198820   73496 system_pods.go:59] 9 kube-system pods found
	I1105 19:17:01.198854   73496 system_pods.go:61] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198862   73496 system_pods.go:61] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198869   73496 system_pods.go:61] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.198873   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.198879   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.198883   73496 system_pods.go:61] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.198887   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.198893   73496 system_pods.go:61] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.198896   73496 system_pods.go:61] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.198903   73496 system_pods.go:74] duration metric: took 8.418414ms to wait for pod list to return data ...
	I1105 19:17:01.198913   73496 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:17:01.202229   73496 default_sa.go:45] found service account: "default"
	I1105 19:17:01.202251   73496 default_sa.go:55] duration metric: took 3.332652ms for default service account to be created ...
	I1105 19:17:01.202260   73496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:17:01.208774   73496 system_pods.go:86] 9 kube-system pods found
	I1105 19:17:01.208803   73496 system_pods.go:89] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208811   73496 system_pods.go:89] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208817   73496 system_pods.go:89] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.208821   73496 system_pods.go:89] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.208825   73496 system_pods.go:89] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.208828   73496 system_pods.go:89] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.208833   73496 system_pods.go:89] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.208838   73496 system_pods.go:89] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.208842   73496 system_pods.go:89] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.208848   73496 system_pods.go:126] duration metric: took 6.584071ms to wait for k8s-apps to be running ...
	I1105 19:17:01.208856   73496 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:17:01.208898   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:01.225005   73496 system_svc.go:56] duration metric: took 16.138051ms WaitForService to wait for kubelet
	I1105 19:17:01.225038   73496 kubeadm.go:582] duration metric: took 5.328067688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:17:01.225062   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:17:01.347771   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:17:01.347799   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:17:01.347813   73496 node_conditions.go:105] duration metric: took 122.746343ms to run NodePressure ...
	I1105 19:17:01.347826   73496 start.go:241] waiting for startup goroutines ...
	I1105 19:17:01.347834   73496 start.go:246] waiting for cluster config update ...
	I1105 19:17:01.347846   73496 start.go:255] writing updated cluster config ...
	I1105 19:17:01.348126   73496 ssh_runner.go:195] Run: rm -f paused
	I1105 19:17:01.396396   73496 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:17:01.398528   73496 out.go:177] * Done! kubectl is now configured to use "no-preload-459223" cluster and "default" namespace by default
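	(Illustrative check, not part of the captured log.) At this point the log reports the no-preload-459223 control-plane pods Ready, the metrics-server pod still Pending, and kubectl configured for the new context; under those assumptions the state described above could be confirmed from the host with:
		kubectl --context no-preload-459223 get nodes
		kubectl --context no-preload-459223 get pods -n kube-system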
	I1105 19:17:36.069129   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:17:36.069396   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:17:36.069426   74485 kubeadm.go:310] 
	I1105 19:17:36.069489   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:17:36.069572   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:17:36.069591   74485 kubeadm.go:310] 
	I1105 19:17:36.069638   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:17:36.069699   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:17:36.069843   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:17:36.069852   74485 kubeadm.go:310] 
	I1105 19:17:36.069967   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:17:36.070017   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:17:36.070067   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:17:36.070074   74485 kubeadm.go:310] 
	I1105 19:17:36.070216   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:17:36.070328   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:17:36.070345   74485 kubeadm.go:310] 
	I1105 19:17:36.070486   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:17:36.070622   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:17:36.070690   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:17:36.070758   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:17:36.070767   74485 kubeadm.go:310] 
	I1105 19:17:36.071471   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:17:36.071558   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:17:36.071652   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1105 19:17:36.071791   74485 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1105 19:17:36.071838   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:17:36.527864   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:36.543211   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:17:36.552656   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:17:36.552676   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:17:36.552734   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:17:36.562296   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:17:36.562360   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:17:36.571759   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:17:36.580534   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:17:36.580586   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:17:36.590320   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.599165   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:17:36.599235   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.608340   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:17:36.616935   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:17:36.616986   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
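	(Sketch of the cleanup pattern shown above, not part of the log.) Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not contain it (here, because none of the files exist) is removed before kubeadm init is retried. The equivalent shell loop would be:
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done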
	I1105 19:17:36.625948   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:17:36.843267   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:19:32.770686   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:19:32.770828   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 19:19:32.772504   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:19:32.772564   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:19:32.772656   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:19:32.772784   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:19:32.772893   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:19:32.772971   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:19:32.774648   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:19:32.774726   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:19:32.774804   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:19:32.774902   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:19:32.775012   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:19:32.775144   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:19:32.775223   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:19:32.775307   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:19:32.775397   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:19:32.775487   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:19:32.775597   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:19:32.775651   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:19:32.775728   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:19:32.775796   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:19:32.775864   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:19:32.775961   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:19:32.776041   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:19:32.776175   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:19:32.776281   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:19:32.776330   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:19:32.776417   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:19:32.777837   74485 out.go:235]   - Booting up control plane ...
	I1105 19:19:32.777940   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:19:32.778032   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:19:32.778134   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:19:32.778248   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:19:32.778489   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:19:32.778563   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:19:32.778652   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.778960   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779080   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779302   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779399   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779663   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779766   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779990   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780051   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.780241   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780260   74485 kubeadm.go:310] 
	I1105 19:19:32.780325   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:19:32.780381   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:19:32.780391   74485 kubeadm.go:310] 
	I1105 19:19:32.780438   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:19:32.780486   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:19:32.780627   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:19:32.780639   74485 kubeadm.go:310] 
	I1105 19:19:32.780748   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:19:32.780790   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:19:32.780819   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:19:32.780825   74485 kubeadm.go:310] 
	I1105 19:19:32.780961   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:19:32.781048   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:19:32.781055   74485 kubeadm.go:310] 
	I1105 19:19:32.781144   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:19:32.781225   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:19:32.781293   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:19:32.781394   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:19:32.781475   74485 kubeadm.go:394] duration metric: took 8m1.792270232s to StartCluster
	I1105 19:19:32.781485   74485 kubeadm.go:310] 
	I1105 19:19:32.781522   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:19:32.781589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:19:32.825435   74485 cri.go:89] found id: ""
	I1105 19:19:32.825465   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.825475   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:19:32.825482   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:19:32.825543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:19:32.859245   74485 cri.go:89] found id: ""
	I1105 19:19:32.859275   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.859286   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:19:32.859293   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:19:32.859355   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:19:32.890801   74485 cri.go:89] found id: ""
	I1105 19:19:32.890833   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.890844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:19:32.890851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:19:32.890919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:19:32.925244   74485 cri.go:89] found id: ""
	I1105 19:19:32.925273   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.925280   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:19:32.925287   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:19:32.925352   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:19:32.959091   74485 cri.go:89] found id: ""
	I1105 19:19:32.959118   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.959129   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:19:32.959137   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:19:32.959191   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:19:32.990230   74485 cri.go:89] found id: ""
	I1105 19:19:32.990264   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.990276   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:19:32.990284   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:19:32.990343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:19:33.027461   74485 cri.go:89] found id: ""
	I1105 19:19:33.027494   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.027505   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:19:33.027512   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:19:33.027574   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:19:33.070819   74485 cri.go:89] found id: ""
	I1105 19:19:33.070847   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.070858   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:19:33.070869   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:19:33.070883   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:19:33.122580   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:19:33.122615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:19:33.136015   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:19:33.136043   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:19:33.213727   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:19:33.213750   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:19:33.213762   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:19:33.324287   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:19:33.324333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
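	(Illustrative note, not part of the log.) The diagnostics gathered above (kubelet and CRI-O journals, dmesg, kubectl describe nodes, and container status) are the same data that can be collected in one step with the command the report itself recommends further below:
		minikube logs --file=logs.txt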
	W1105 19:19:33.384732   74485 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 19:19:33.384785   74485 out.go:270] * 
	W1105 19:19:33.384844   74485 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.384857   74485 out.go:270] * 
	W1105 19:19:33.385632   74485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:19:33.388860   74485 out.go:201] 
	W1105 19:19:33.390328   74485 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (e.g. required cgroups are disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.390366   74485 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 19:19:33.390393   74485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 19:19:33.391785   74485 out.go:201] 
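	
	A minimal sketch of the retry suggested in the output above, assuming a hypothetical profile name <profile> (substitute the profile used in this run); the flag value is the one the log itself recommends:
	# sketch only: <profile> is a placeholder, the extra-config flag is taken from the suggestion above
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	# if kubeadm still times out, inspect the kubelet on the node
	minikube -p <profile> ssh "sudo journalctl -xeu kubelet"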
	
	
	==> CRI-O <==
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.379373777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834763379351504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d219d76f-b582-42d9-8d58-9086d7fb5c82 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.380019056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a20db735-0156-413d-9d4a-d67d31fa2773 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.380095987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a20db735-0156-413d-9d4a-d67d31fa2773 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.380314019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab,PodSandboxId:53d95ad8175d2c3e2a0547d1e54ab7d716d92f9f6bb34d3b393fbf1e44fc3dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218398362023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xx9wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17910730-8b50-4223-8af5-82b701aa2f96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af,PodSandboxId:9c68653e627573ac6486fdd226956920611b4faf77bc00b25cbb0e4c704fe203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218148563926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gl9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bee65a6-f684-4675-b356-62602fa628c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d,PodSandboxId:c565fa80a6aaf317ad0a1e4a15b4dd21f57b5d04f455a10bcfc366451de4d05d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1730834217463475970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4743de2f-37ed-4b92-ac4e-4bcbff5897b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a,PodSandboxId:2e59b18e4713ed733f5c8b56a24b6afdd6659fd83fd02f8790941a1a64001db9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730834217239521206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txq44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4a537b-e4cc-4254-9a22-679795366362,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5,PodSandboxId:4ee8c4b268f91471c4186d36d454da0207df96223ef74f008b0f172b6965f7da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834206622064588,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5b9e61ccfc5846d0b9bbd773dc071,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795,PodSandboxId:2722b4838dace6612ede6aacfd690bfa3ad6ea7383a0a4ae5436bb7f0b82ce1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173083420657764
6503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec,PodSandboxId:769f2d218ba80fd7d1999b1f5008c9e15b825a554d76b09f545800c6fbfc4fdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834206543391485,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcfc5f9c14a629c1363a718710ab4809,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda,PodSandboxId:81812c8fa67882adaf70636f9e0601298b63deb80ec077a0c3d97f57bfd56719,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834206540810236,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e114f84917815ecea095e683e62042c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53,PodSandboxId:dcd5be362a6c5770f7d6fe56e370839847e1dce1b092bbbd3c55b5162b656551,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833921559164343,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a20db735-0156-413d-9d4a-d67d31fa2773 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.418387892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c94729c-52e7-475b-a5db-7cbfe83dfe7c name=/runtime.v1.RuntimeService/Version
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.418461312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c94729c-52e7-475b-a5db-7cbfe83dfe7c name=/runtime.v1.RuntimeService/Version
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.419385431Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5859d8a-f090-48ee-9349-1b96cd74def3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.419816535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834763419792956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5859d8a-f090-48ee-9349-1b96cd74def3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.420329299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50e138d5-b4a0-4683-9826-c61242d6f631 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.420379297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50e138d5-b4a0-4683-9826-c61242d6f631 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.420585123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab,PodSandboxId:53d95ad8175d2c3e2a0547d1e54ab7d716d92f9f6bb34d3b393fbf1e44fc3dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218398362023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xx9wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17910730-8b50-4223-8af5-82b701aa2f96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af,PodSandboxId:9c68653e627573ac6486fdd226956920611b4faf77bc00b25cbb0e4c704fe203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218148563926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gl9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bee65a6-f684-4675-b356-62602fa628c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d,PodSandboxId:c565fa80a6aaf317ad0a1e4a15b4dd21f57b5d04f455a10bcfc366451de4d05d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1730834217463475970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4743de2f-37ed-4b92-ac4e-4bcbff5897b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a,PodSandboxId:2e59b18e4713ed733f5c8b56a24b6afdd6659fd83fd02f8790941a1a64001db9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730834217239521206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txq44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4a537b-e4cc-4254-9a22-679795366362,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5,PodSandboxId:4ee8c4b268f91471c4186d36d454da0207df96223ef74f008b0f172b6965f7da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834206622064588,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5b9e61ccfc5846d0b9bbd773dc071,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795,PodSandboxId:2722b4838dace6612ede6aacfd690bfa3ad6ea7383a0a4ae5436bb7f0b82ce1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173083420657764
6503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec,PodSandboxId:769f2d218ba80fd7d1999b1f5008c9e15b825a554d76b09f545800c6fbfc4fdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834206543391485,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcfc5f9c14a629c1363a718710ab4809,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda,PodSandboxId:81812c8fa67882adaf70636f9e0601298b63deb80ec077a0c3d97f57bfd56719,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834206540810236,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e114f84917815ecea095e683e62042c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53,PodSandboxId:dcd5be362a6c5770f7d6fe56e370839847e1dce1b092bbbd3c55b5162b656551,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833921559164343,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50e138d5-b4a0-4683-9826-c61242d6f631 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.462262495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=792e98a1-c212-43d7-bf4b-27a394aa0014 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.462355556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=792e98a1-c212-43d7-bf4b-27a394aa0014 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.463674712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba6118f8-8d56-41d7-9d57-afb7ee71f154 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.464134494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834763464107090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba6118f8-8d56-41d7-9d57-afb7ee71f154 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.464693192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8a11913-4937-40a2-8c15-b1e78a9145a0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.464791130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8a11913-4937-40a2-8c15-b1e78a9145a0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.464998291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab,PodSandboxId:53d95ad8175d2c3e2a0547d1e54ab7d716d92f9f6bb34d3b393fbf1e44fc3dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218398362023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xx9wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17910730-8b50-4223-8af5-82b701aa2f96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af,PodSandboxId:9c68653e627573ac6486fdd226956920611b4faf77bc00b25cbb0e4c704fe203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218148563926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gl9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bee65a6-f684-4675-b356-62602fa628c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d,PodSandboxId:c565fa80a6aaf317ad0a1e4a15b4dd21f57b5d04f455a10bcfc366451de4d05d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1730834217463475970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4743de2f-37ed-4b92-ac4e-4bcbff5897b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a,PodSandboxId:2e59b18e4713ed733f5c8b56a24b6afdd6659fd83fd02f8790941a1a64001db9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730834217239521206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txq44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4a537b-e4cc-4254-9a22-679795366362,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5,PodSandboxId:4ee8c4b268f91471c4186d36d454da0207df96223ef74f008b0f172b6965f7da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834206622064588,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5b9e61ccfc5846d0b9bbd773dc071,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795,PodSandboxId:2722b4838dace6612ede6aacfd690bfa3ad6ea7383a0a4ae5436bb7f0b82ce1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173083420657764
6503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec,PodSandboxId:769f2d218ba80fd7d1999b1f5008c9e15b825a554d76b09f545800c6fbfc4fdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834206543391485,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcfc5f9c14a629c1363a718710ab4809,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda,PodSandboxId:81812c8fa67882adaf70636f9e0601298b63deb80ec077a0c3d97f57bfd56719,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834206540810236,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e114f84917815ecea095e683e62042c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53,PodSandboxId:dcd5be362a6c5770f7d6fe56e370839847e1dce1b092bbbd3c55b5162b656551,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833921559164343,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8a11913-4937-40a2-8c15-b1e78a9145a0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.497313957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92cdcc5a-3ec5-489e-8877-a7650e4d10a9 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.497390947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92cdcc5a-3ec5-489e-8877-a7650e4d10a9 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.498291073Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9274fa95-774d-4cfa-80de-eb8a67e3f8db name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.498646371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834763498624814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9274fa95-774d-4cfa-80de-eb8a67e3f8db name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.499133988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91fe0d09-88df-4f7f-b2f4-e1fda26084ad name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.499209596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91fe0d09-88df-4f7f-b2f4-e1fda26084ad name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:26:03 no-preload-459223 crio[709]: time="2024-11-05 19:26:03.499410336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab,PodSandboxId:53d95ad8175d2c3e2a0547d1e54ab7d716d92f9f6bb34d3b393fbf1e44fc3dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218398362023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xx9wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17910730-8b50-4223-8af5-82b701aa2f96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af,PodSandboxId:9c68653e627573ac6486fdd226956920611b4faf77bc00b25cbb0e4c704fe203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218148563926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gl9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bee65a6-f684-4675-b356-62602fa628c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d,PodSandboxId:c565fa80a6aaf317ad0a1e4a15b4dd21f57b5d04f455a10bcfc366451de4d05d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1730834217463475970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4743de2f-37ed-4b92-ac4e-4bcbff5897b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a,PodSandboxId:2e59b18e4713ed733f5c8b56a24b6afdd6659fd83fd02f8790941a1a64001db9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730834217239521206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txq44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4a537b-e4cc-4254-9a22-679795366362,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5,PodSandboxId:4ee8c4b268f91471c4186d36d454da0207df96223ef74f008b0f172b6965f7da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834206622064588,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5b9e61ccfc5846d0b9bbd773dc071,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795,PodSandboxId:2722b4838dace6612ede6aacfd690bfa3ad6ea7383a0a4ae5436bb7f0b82ce1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173083420657764
6503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec,PodSandboxId:769f2d218ba80fd7d1999b1f5008c9e15b825a554d76b09f545800c6fbfc4fdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834206543391485,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcfc5f9c14a629c1363a718710ab4809,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda,PodSandboxId:81812c8fa67882adaf70636f9e0601298b63deb80ec077a0c3d97f57bfd56719,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834206540810236,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e114f84917815ecea095e683e62042c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53,PodSandboxId:dcd5be362a6c5770f7d6fe56e370839847e1dce1b092bbbd3c55b5162b656551,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833921559164343,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91fe0d09-88df-4f7f-b2f4-e1fda26084ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8299ec71cd6b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   53d95ad8175d2       coredns-7c65d6cfc9-xx9wl
	06944d69e896b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   9c68653e62757       coredns-7c65d6cfc9-gl9th
	a9107fec3c6ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c565fa80a6aaf       storage-provisioner
	fef03f0dffe73       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   2e59b18e4713e       kube-proxy-txq44
	e0e6f9312034b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   4ee8c4b268f91       kube-controller-manager-no-preload-459223
	e508df75b1e52       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   2722b4838dace       kube-apiserver-no-preload-459223
	23716e18606f9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   769f2d218ba80       kube-scheduler-no-preload-459223
	fe5cad52df568       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   81812c8fa6788       etcd-no-preload-459223
	19f1612ca8def       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   dcd5be362a6c5       kube-apiserver-no-preload-459223
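	
	A minimal sketch for reproducing the listing above directly on the node, assuming the minikube profile name matches the node name no-preload-459223 shown in these logs:
	# sketch: profile name assumed to equal the node name above; endpoint path is the one used earlier in this report
	minikube -p no-preload-459223 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"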
	
	
	==> coredns [06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-459223
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-459223
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=no-preload-459223
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T19_16_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 19:16:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-459223
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 19:26:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 19:22:06 +0000   Tue, 05 Nov 2024 19:16:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 19:22:06 +0000   Tue, 05 Nov 2024 19:16:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 19:22:06 +0000   Tue, 05 Nov 2024 19:16:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 19:22:06 +0000   Tue, 05 Nov 2024 19:16:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.101
	  Hostname:    no-preload-459223
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1674e32c04b493ead7da91f37718f8a
	  System UUID:                b1674e32-c04b-493e-ad7d-a91f37718f8a
	  Boot ID:                    a9004ea1-1fbf-4031-a350-a672fb92ac60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gl9th                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7c65d6cfc9-xx9wl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-459223                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m12s
	  kube-system                 kube-apiserver-no-preload-459223             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-controller-manager-no-preload-459223    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-proxy-txq44                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-459223             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 metrics-server-6867b74b74-qbgx4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m12s  kubelet          Node no-preload-459223 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s  kubelet          Node no-preload-459223 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s  kubelet          Node no-preload-459223 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node no-preload-459223 event: Registered Node no-preload-459223 in Controller
	
	
	==> dmesg <==
	[  +0.041727] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.227125] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.936410] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.536117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.310713] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.060096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058668] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.185471] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.124899] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.280956] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[ +15.404763] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.059391] kauditd_printk_skb: 130 callbacks suppressed
	[Nov 5 19:12] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +4.014361] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.347496] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.217564] kauditd_printk_skb: 25 callbacks suppressed
	[Nov 5 19:16] systemd-fstab-generator[3091]: Ignoring "noauto" option for root device
	[  +0.061361] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.000648] systemd-fstab-generator[3407]: Ignoring "noauto" option for root device
	[  +0.081673] kauditd_printk_skb: 52 callbacks suppressed
	[  +4.333170] systemd-fstab-generator[3526]: Ignoring "noauto" option for root device
	[  +1.183763] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 5 19:17] kauditd_printk_skb: 66 callbacks suppressed
	
	
	==> etcd [fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda] <==
	{"level":"info","ts":"2024-11-05T19:16:46.881238Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T19:16:46.881270Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9dd5856f1db18b5a","local-member-id":"a006cd7aeaf5eb83","added-peer-id":"a006cd7aeaf5eb83","added-peer-peer-urls":["https://192.168.72.101:2380"]}
	{"level":"info","ts":"2024-11-05T19:16:46.881406Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-11-05T19:16:46.881589Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.101:2380"}
	{"level":"info","ts":"2024-11-05T19:16:46.881639Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.101:2380"}
	{"level":"info","ts":"2024-11-05T19:16:47.637817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 is starting a new election at term 1"}
	{"level":"info","ts":"2024-11-05T19:16:47.637925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-11-05T19:16:47.637971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 received MsgPreVoteResp from a006cd7aeaf5eb83 at term 1"}
	{"level":"info","ts":"2024-11-05T19:16:47.638022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 became candidate at term 2"}
	{"level":"info","ts":"2024-11-05T19:16:47.638052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 received MsgVoteResp from a006cd7aeaf5eb83 at term 2"}
	{"level":"info","ts":"2024-11-05T19:16:47.638121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 became leader at term 2"}
	{"level":"info","ts":"2024-11-05T19:16:47.638155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a006cd7aeaf5eb83 elected leader a006cd7aeaf5eb83 at term 2"}
	{"level":"info","ts":"2024-11-05T19:16:47.642911Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:16:47.646969Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a006cd7aeaf5eb83","local-member-attributes":"{Name:no-preload-459223 ClientURLs:[https://192.168.72.101:2379]}","request-path":"/0/members/a006cd7aeaf5eb83/attributes","cluster-id":"9dd5856f1db18b5a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T19:16:47.647128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:16:47.647257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:16:47.648200Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:16:47.648996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T19:16:47.649054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T19:16:47.649086Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-05T19:16:47.649564Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:16:47.655636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.101:2379"}
	{"level":"info","ts":"2024-11-05T19:16:47.658819Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9dd5856f1db18b5a","local-member-id":"a006cd7aeaf5eb83","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:16:47.704449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:16:47.714856Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:26:03 up 14 min,  0 users,  load average: 0.26, 0.21, 0.17
	Linux no-preload-459223 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53] <==
	W1105 19:16:41.803272       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:41.809024       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:41.809042       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.008683       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.034392       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.072804       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.128600       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.129861       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.154429       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.159117       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.171923       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.183306       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.244880       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.260464       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.260547       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.281594       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.288508       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.344219       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.347974       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.452160       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.460027       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.572281       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.671534       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.720454       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.816836       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795] <==
	E1105 19:21:50.095335       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1105 19:21:50.095433       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:21:50.096487       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:21:50.096516       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:22:50.096645       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:22:50.096821       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1105 19:22:50.096864       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:22:50.096879       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1105 19:22:50.097959       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:22:50.098011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:24:50.098406       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:24:50.098553       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1105 19:24:50.098807       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:24:50.098958       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:24:50.099791       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:24:50.100968       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5] <==
	E1105 19:20:56.087428       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:20:56.553539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:21:26.093593       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:21:26.561241       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:21:56.099716       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:21:56.569168       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:22:06.367188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-459223"
	E1105 19:22:26.106002       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:22:26.576570       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:22:56.112842       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:22:56.584255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:22:58.864004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="321.161µs"
	I1105 19:23:10.861441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="304.426µs"
	E1105 19:23:26.120108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:23:26.593520       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:23:56.126586       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:23:56.600643       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:24:26.133324       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:24:26.608093       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:24:56.140086       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:24:56.620253       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:25:26.148576       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:25:26.630087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:25:56.154825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:25:56.638448       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 19:16:57.580648       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 19:16:57.592492       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.101"]
	E1105 19:16:57.592574       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 19:16:57.641919       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 19:16:57.641979       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 19:16:57.642018       1 server_linux.go:169] "Using iptables Proxier"
	I1105 19:16:57.644364       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 19:16:57.644656       1 server.go:483] "Version info" version="v1.31.2"
	I1105 19:16:57.644682       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:16:57.647555       1 config.go:199] "Starting service config controller"
	I1105 19:16:57.647603       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 19:16:57.647634       1 config.go:105] "Starting endpoint slice config controller"
	I1105 19:16:57.647658       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 19:16:57.648223       1 config.go:328] "Starting node config controller"
	I1105 19:16:57.648253       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 19:16:57.748005       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 19:16:57.748091       1 shared_informer.go:320] Caches are synced for service config
	I1105 19:16:57.748674       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec] <==
	W1105 19:16:49.958927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 19:16:49.958963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:49.988674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 19:16:49.988778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.048725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 19:16:50.048819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.059647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 19:16:50.059856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.074308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 19:16:50.074383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.090358       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 19:16:50.090439       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 19:16:50.197143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 19:16:50.197203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.204798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 19:16:50.204843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.204891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 19:16:50.204913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.280014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1105 19:16:50.280063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.290021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 19:16:50.290068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.290509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1105 19:16:50.290581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1105 19:16:53.219907       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 19:24:55 no-preload-459223 kubelet[3414]: E1105 19:24:55.847885    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:25:02 no-preload-459223 kubelet[3414]: E1105 19:25:02.012162    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834702011697419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:02 no-preload-459223 kubelet[3414]: E1105 19:25:02.012209    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834702011697419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:09 no-preload-459223 kubelet[3414]: E1105 19:25:09.848691    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:25:12 no-preload-459223 kubelet[3414]: E1105 19:25:12.013902    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834712013445025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:12 no-preload-459223 kubelet[3414]: E1105 19:25:12.013938    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834712013445025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:22 no-preload-459223 kubelet[3414]: E1105 19:25:22.015090    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834722014715593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:22 no-preload-459223 kubelet[3414]: E1105 19:25:22.015137    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834722014715593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:22 no-preload-459223 kubelet[3414]: E1105 19:25:22.847096    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:25:32 no-preload-459223 kubelet[3414]: E1105 19:25:32.017223    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834732016896236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:32 no-preload-459223 kubelet[3414]: E1105 19:25:32.017246    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834732016896236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:34 no-preload-459223 kubelet[3414]: E1105 19:25:34.846657    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:25:42 no-preload-459223 kubelet[3414]: E1105 19:25:42.018403    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834742018015538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:42 no-preload-459223 kubelet[3414]: E1105 19:25:42.018508    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834742018015538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:45 no-preload-459223 kubelet[3414]: E1105 19:25:45.846807    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:25:51 no-preload-459223 kubelet[3414]: E1105 19:25:51.893004    3414 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 19:25:51 no-preload-459223 kubelet[3414]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 19:25:51 no-preload-459223 kubelet[3414]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 19:25:51 no-preload-459223 kubelet[3414]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 19:25:51 no-preload-459223 kubelet[3414]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 19:25:52 no-preload-459223 kubelet[3414]: E1105 19:25:52.019384    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834752019114881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:52 no-preload-459223 kubelet[3414]: E1105 19:25:52.019423    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834752019114881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:25:57 no-preload-459223 kubelet[3414]: E1105 19:25:57.847396    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:26:02 no-preload-459223 kubelet[3414]: E1105 19:26:02.020880    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834762020474957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:26:02 no-preload-459223 kubelet[3414]: E1105 19:26:02.020907    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834762020474957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d] <==
	I1105 19:16:57.643089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 19:16:57.665537       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 19:16:57.665704       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 19:16:57.674070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 19:16:57.674300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-459223_3a9bccea-688e-41f3-9501-f401ac215d00!
	I1105 19:16:57.674502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0cd88a65-6c4d-438c-9999-065e0d08e692", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-459223_3a9bccea-688e-41f3-9501-f401ac215d00 became leader
	I1105 19:16:57.774498       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-459223_3a9bccea-688e-41f3-9501-f401ac215d00!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-459223 -n no-preload-459223
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-459223 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-qbgx4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-459223 describe pod metrics-server-6867b74b74-qbgx4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-459223 describe pod metrics-server-6867b74b74-qbgx4: exit status 1 (63.988233ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-qbgx4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-459223 describe pod metrics-server-6867b74b74-qbgx4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.13s)
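For reference, a minimal manual sketch of the post-mortem steps above, assuming the no-preload-459223 profile still exists. The NotFound from the describe call is likely a namespace mismatch (the helper omits -n kube-system, so kubectl searches the default namespace) rather than evidence that the metrics-server pod was removed; the kubelet log above still shows it stuck in ImagePullBackOff:

	# same checks the harness runs, with the namespace made explicit (an assumption, not what helpers_test.go does)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-459223
	kubectl --context no-preload-459223 get po -A --field-selector=status.phase!=Running
	kubectl --context no-preload-459223 -n kube-system describe pod metrics-server-6867b74b74-qbgx4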

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:19:50.266027   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:20:05.694737   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:20:52.080914   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:21:13.330284   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:21:28.760069   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:21:37.461702   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:22:15.145039   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:22:21.924391   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:22:31.419352   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
(the WARNING above was logged 24 times in a row)
E1105 19:22:55.008719   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
(the WARNING above was logged 5 times in a row)
E1105 19:23:00.527396   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:23:01.007030   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
(the WARNING above was logged 44 times in a row)
E1105 19:23:44.988923   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
(the WARNING above was logged 22 times in a row)
E1105 19:24:06.920768   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
(the WARNING above was logged 43 times in a row)
E1105 19:24:50.265966   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
(the WARNING above was logged 27 times in a row)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:25:52.080632   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:26:37.462320   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:27:21.924264   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:27:31.418587   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:27:55.009060   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:28:01.006633   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 2 (231.450385ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-567666" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
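
The warnings above are the test helper repeatedly listing pods in the kubernetes-dashboard namespace against the profile's apiserver endpoint until the 9m0s deadline expires. A minimal client-go sketch of a roughly equivalent query is included below for reference; the kubeconfig path is taken from this run's KUBECONFIG, while the timeout and error handling are illustrative assumptions rather than part of the test suite.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed path: the kubeconfig this CI run exports as KUBECONFIG.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19910-8296/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		// Same namespace and label selector the helper polls for.
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// While the apiserver is stopped this returns "connection refused",
			// matching the warnings logged above.
			fmt.Println("list failed:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}
	}

Run against a profile whose apiserver is down (as the status check below reports), the query fails the same way as the polling loop; once the apiserver is reachable again it lists the dashboard pods and their phases.
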
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 2 (221.679435ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-567666 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-567666 logs -n 25: (1.507810155s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-929548 sudo cat                              | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo find                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo crio                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-929548                                       | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-537175 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | disable-driver-mounts-537175                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:04 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-459223             | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-271881            | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:07:52.649090   74485 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:07:52.649200   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649205   74485 out.go:358] Setting ErrFile to fd 2...
	I1105 19:07:52.649210   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649374   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:07:52.649909   74485 out.go:352] Setting JSON to false
	I1105 19:07:52.650785   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6615,"bootTime":1730827058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:07:52.650878   74485 start.go:139] virtualization: kvm guest
	I1105 19:07:52.652866   74485 out.go:177] * [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:07:52.654107   74485 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:07:52.654107   74485 notify.go:220] Checking for updates...
	I1105 19:07:52.655282   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:07:52.656379   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:07:52.657451   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:07:52.658694   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:07:52.659835   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:07:52.661251   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:07:52.661622   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.661660   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.677005   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I1105 19:07:52.677521   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.678096   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.678118   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.678489   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.678735   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.680466   74485 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1105 19:07:52.681734   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:07:52.682087   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.682139   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.697071   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1105 19:07:52.697503   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.697958   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.697980   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.698259   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.698439   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.732962   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:07:52.734079   74485 start.go:297] selected driver: kvm2
	I1105 19:07:52.734094   74485 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.734209   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:07:52.734912   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.735038   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:07:52.750214   74485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:07:52.750609   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:07:52.750641   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:07:52.750697   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:07:52.750745   74485 start.go:340] cluster config:
	{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.750877   74485 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.753288   74485 out.go:177] * Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	I1105 19:07:50.739209   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:53.811246   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:52.754354   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:07:52.754407   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 19:07:52.754425   74485 cache.go:56] Caching tarball of preloaded images
	I1105 19:07:52.754503   74485 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:07:52.754515   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 19:07:52.754610   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:07:52.754817   74485 start.go:360] acquireMachinesLock for old-k8s-version-567666: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:07:59.891257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:02.963247   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:09.043263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:12.115289   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:18.195275   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:21.267213   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:27.347251   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:30.419240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:36.499291   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:39.571255   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:45.651258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:48.723262   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:54.803265   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:57.875236   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:03.955241   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:07.027229   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:13.107258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:16.179257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:22.259227   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:25.331263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:31.411234   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:34.483240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:40.563258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:43.635253   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:49.715287   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:52.787276   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:58.867242   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:01.939296   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:08.019268   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:11.091350   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:17.171266   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:20.243245   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:23.247511   73732 start.go:364] duration metric: took 4m30.277290481s to acquireMachinesLock for "embed-certs-271881"
	I1105 19:10:23.247565   73732 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:23.247590   73732 fix.go:54] fixHost starting: 
	I1105 19:10:23.248173   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:23.248235   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:23.263573   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I1105 19:10:23.264016   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:23.264437   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:10:23.264461   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:23.264888   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:23.265122   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:23.265311   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:10:23.267000   73732 fix.go:112] recreateIfNeeded on embed-certs-271881: state=Stopped err=<nil>
	I1105 19:10:23.267031   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	W1105 19:10:23.267183   73732 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:23.269188   73732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-271881" ...
	I1105 19:10:23.244961   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:23.245021   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245327   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:10:23.245352   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245536   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:10:23.247352   73496 machine.go:96] duration metric: took 4m37.425023044s to provisionDockerMachine
	I1105 19:10:23.247393   73496 fix.go:56] duration metric: took 4m37.446801616s for fixHost
	I1105 19:10:23.247400   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 4m37.446835698s
	W1105 19:10:23.247424   73496 start.go:714] error starting host: provision: host is not running
	W1105 19:10:23.247522   73496 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1105 19:10:23.247534   73496 start.go:729] Will try again in 5 seconds ...
	I1105 19:10:23.270443   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Start
	I1105 19:10:23.270681   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring networks are active...
	I1105 19:10:23.271552   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network default is active
	I1105 19:10:23.271924   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network mk-embed-certs-271881 is active
	I1105 19:10:23.272243   73732 main.go:141] libmachine: (embed-certs-271881) Getting domain xml...
	I1105 19:10:23.273027   73732 main.go:141] libmachine: (embed-certs-271881) Creating domain...
	I1105 19:10:24.503219   73732 main.go:141] libmachine: (embed-certs-271881) Waiting to get IP...
	I1105 19:10:24.504067   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.504444   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.504503   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.504415   75020 retry.go:31] will retry after 194.539819ms: waiting for machine to come up
	I1105 19:10:24.701086   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.701552   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.701579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.701501   75020 retry.go:31] will retry after 361.371677ms: waiting for machine to come up
	I1105 19:10:25.064078   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.064484   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.064512   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.064433   75020 retry.go:31] will retry after 442.206433ms: waiting for machine to come up
	I1105 19:10:25.507981   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.508380   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.508405   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.508338   75020 retry.go:31] will retry after 573.453662ms: waiting for machine to come up
	I1105 19:10:26.083299   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.083727   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.083753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.083670   75020 retry.go:31] will retry after 686.210957ms: waiting for machine to come up
	I1105 19:10:26.771637   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.772070   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.772112   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.772062   75020 retry.go:31] will retry after 685.825223ms: waiting for machine to come up
	I1105 19:10:27.459230   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:27.459652   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:27.459677   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:27.459616   75020 retry.go:31] will retry after 1.167971852s: waiting for machine to come up
	I1105 19:10:28.247729   73496 start.go:360] acquireMachinesLock for no-preload-459223: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:10:28.629194   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:28.629526   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:28.629549   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:28.629488   75020 retry.go:31] will retry after 1.180980288s: waiting for machine to come up
	I1105 19:10:29.812048   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:29.812445   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:29.812475   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:29.812390   75020 retry.go:31] will retry after 1.527253183s: waiting for machine to come up
	I1105 19:10:31.342147   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:31.342519   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:31.342546   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:31.342467   75020 retry.go:31] will retry after 1.597485878s: waiting for machine to come up
	I1105 19:10:32.942141   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:32.942459   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:32.942505   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:32.942431   75020 retry.go:31] will retry after 2.416793509s: waiting for machine to come up
	I1105 19:10:35.360354   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:35.360717   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:35.360743   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:35.360674   75020 retry.go:31] will retry after 3.193637492s: waiting for machine to come up
	I1105 19:10:38.556294   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:38.556744   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:38.556775   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:38.556673   75020 retry.go:31] will retry after 3.819760443s: waiting for machine to come up
	I1105 19:10:42.380607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381140   73732 main.go:141] libmachine: (embed-certs-271881) Found IP for machine: 192.168.39.58
	I1105 19:10:42.381172   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has current primary IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381196   73732 main.go:141] libmachine: (embed-certs-271881) Reserving static IP address...
	I1105 19:10:42.381607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.381634   73732 main.go:141] libmachine: (embed-certs-271881) Reserved static IP address: 192.168.39.58
	I1105 19:10:42.381647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | skip adding static IP to network mk-embed-certs-271881 - found existing host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"}
	I1105 19:10:42.381671   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Getting to WaitForSSH function...
	I1105 19:10:42.381686   73732 main.go:141] libmachine: (embed-certs-271881) Waiting for SSH to be available...
	I1105 19:10:42.383908   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384306   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.384333   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384427   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH client type: external
	I1105 19:10:42.384458   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa (-rw-------)
	I1105 19:10:42.384486   73732 main.go:141] libmachine: (embed-certs-271881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:10:42.384502   73732 main.go:141] libmachine: (embed-certs-271881) DBG | About to run SSH command:
	I1105 19:10:42.384510   73732 main.go:141] libmachine: (embed-certs-271881) DBG | exit 0
	I1105 19:10:42.506807   73732 main.go:141] libmachine: (embed-certs-271881) DBG | SSH cmd err, output: <nil>: 
	I1105 19:10:42.507217   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetConfigRaw
	I1105 19:10:42.507868   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.510314   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.510680   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510936   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/config.json ...
	I1105 19:10:42.511183   73732 machine.go:93] provisionDockerMachine start ...
	I1105 19:10:42.511203   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:42.511426   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.513759   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514111   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.514144   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514290   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.514473   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514654   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514827   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.514979   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.515191   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.515202   73732 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:10:42.619241   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:10:42.619273   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619517   73732 buildroot.go:166] provisioning hostname "embed-certs-271881"
	I1105 19:10:42.619555   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619735   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.622695   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623117   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.623146   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623304   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.623465   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623632   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623825   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.623957   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.624122   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.624135   73732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-271881 && echo "embed-certs-271881" | sudo tee /etc/hostname
	I1105 19:10:42.740722   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-271881
	
	I1105 19:10:42.740749   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.743579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.743922   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.743945   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.744160   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.744343   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744470   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.744756   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.744950   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.744972   73732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-271881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-271881/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-271881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:10:42.854869   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:42.854898   73732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:10:42.854926   73732 buildroot.go:174] setting up certificates
	I1105 19:10:42.854940   73732 provision.go:84] configureAuth start
	I1105 19:10:42.854948   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.855222   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.857913   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858228   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.858252   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858440   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.860753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861041   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.861062   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861222   73732 provision.go:143] copyHostCerts
	I1105 19:10:42.861274   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:10:42.861291   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:10:42.861385   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:10:42.861543   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:10:42.861556   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:10:42.861595   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:10:42.861671   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:10:42.861681   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:10:42.861713   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:10:42.861781   73732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.embed-certs-271881 san=[127.0.0.1 192.168.39.58 embed-certs-271881 localhost minikube]
	I1105 19:10:43.659372   74141 start.go:364] duration metric: took 3m39.006624915s to acquireMachinesLock for "default-k8s-diff-port-608095"
	I1105 19:10:43.659450   74141 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:43.659458   74141 fix.go:54] fixHost starting: 
	I1105 19:10:43.659814   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:43.659874   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:43.677604   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I1105 19:10:43.678132   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:43.678624   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:10:43.678649   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:43.679047   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:43.679250   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:10:43.679407   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:10:43.681036   74141 fix.go:112] recreateIfNeeded on default-k8s-diff-port-608095: state=Stopped err=<nil>
	I1105 19:10:43.681063   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	W1105 19:10:43.681194   74141 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:43.683110   74141 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-608095" ...
	I1105 19:10:43.684451   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Start
	I1105 19:10:43.684639   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring networks are active...
	I1105 19:10:43.685436   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network default is active
	I1105 19:10:43.685983   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network mk-default-k8s-diff-port-608095 is active
	I1105 19:10:43.686398   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Getting domain xml...
	I1105 19:10:43.687143   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Creating domain...
	I1105 19:10:43.044648   73732 provision.go:177] copyRemoteCerts
	I1105 19:10:43.044703   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:10:43.044730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.047120   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047506   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.047538   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047717   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.047886   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.048037   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.048186   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.129098   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:10:43.154632   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1105 19:10:43.179681   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 19:10:43.205598   73732 provision.go:87] duration metric: took 350.648117ms to configureAuth
	I1105 19:10:43.205622   73732 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:10:43.205822   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:10:43.205900   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.208446   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.208763   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.208799   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.209006   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.209190   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209489   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.209611   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.209828   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.209850   73732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:10:43.431540   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:10:43.431569   73732 machine.go:96] duration metric: took 920.370689ms to provisionDockerMachine
	I1105 19:10:43.431582   73732 start.go:293] postStartSetup for "embed-certs-271881" (driver="kvm2")
	I1105 19:10:43.431595   73732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:10:43.431617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.431912   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:10:43.431940   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.434821   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435170   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.435193   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435338   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.435532   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.435714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.435851   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.517391   73732 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:10:43.521532   73732 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:10:43.521553   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:10:43.521632   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:10:43.521721   73732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:10:43.521839   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:10:43.531045   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:43.556596   73732 start.go:296] duration metric: took 125.000692ms for postStartSetup
	I1105 19:10:43.556634   73732 fix.go:56] duration metric: took 20.309059136s for fixHost
	I1105 19:10:43.556663   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.558888   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559181   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.559220   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.559531   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559674   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.559934   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.560096   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.560106   73732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:10:43.659219   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833843.637801657
	
	I1105 19:10:43.659240   73732 fix.go:216] guest clock: 1730833843.637801657
	I1105 19:10:43.659247   73732 fix.go:229] Guest: 2024-11-05 19:10:43.637801657 +0000 UTC Remote: 2024-11-05 19:10:43.556637855 +0000 UTC m=+290.729857868 (delta=81.163802ms)
	I1105 19:10:43.659284   73732 fix.go:200] guest clock delta is within tolerance: 81.163802ms
	I1105 19:10:43.659290   73732 start.go:83] releasing machines lock for "embed-certs-271881", held for 20.411743975s
	I1105 19:10:43.659324   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.659589   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:43.662581   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663025   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.663058   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663214   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663907   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.664017   73732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:10:43.664057   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.664108   73732 ssh_runner.go:195] Run: cat /version.json
	I1105 19:10:43.664131   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.666998   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667059   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667365   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667395   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667424   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667438   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667543   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667638   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667897   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667968   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667996   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.668078   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.775067   73732 ssh_runner.go:195] Run: systemctl --version
	I1105 19:10:43.780892   73732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:10:43.919564   73732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:10:43.926362   73732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:10:43.926422   73732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:10:43.942359   73732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:10:43.942378   73732 start.go:495] detecting cgroup driver to use...
	I1105 19:10:43.942450   73732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:10:43.964650   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:10:43.980651   73732 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:10:43.980717   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:10:43.993988   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:10:44.007440   73732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:10:44.132040   73732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:10:44.314220   73732 docker.go:233] disabling docker service ...
	I1105 19:10:44.314294   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:10:44.337362   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:10:44.351277   73732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:10:44.485105   73732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:10:44.621596   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:10:44.636254   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:10:44.656530   73732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:10:44.656595   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.667156   73732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:10:44.667237   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.682233   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.692814   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.704688   73732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:10:44.721662   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.738629   73732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.754944   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.765089   73732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:10:44.774147   73732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:10:44.774210   73732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:10:44.786312   73732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:10:44.795892   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:44.926823   73732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:10:45.022945   73732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:10:45.023042   73732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:10:45.027389   73732 start.go:563] Will wait 60s for crictl version
	I1105 19:10:45.027451   73732 ssh_runner.go:195] Run: which crictl
	I1105 19:10:45.030701   73732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:10:45.067294   73732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:10:45.067410   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.094394   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.123459   73732 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:10:45.124645   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:45.127396   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.127794   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:45.127833   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.128104   73732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 19:10:45.131923   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:45.143951   73732 kubeadm.go:883] updating cluster {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:10:45.144078   73732 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:10:45.144125   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:45.177770   73732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:10:45.177830   73732 ssh_runner.go:195] Run: which lz4
	I1105 19:10:45.181571   73732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:10:45.186569   73732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:10:45.186602   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:10:46.442865   73732 crio.go:462] duration metric: took 1.26132812s to copy over tarball
	I1105 19:10:46.442959   73732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:10:44.962206   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting to get IP...
	I1105 19:10:44.963032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963397   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963492   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:44.963380   75165 retry.go:31] will retry after 274.297859ms: waiting for machine to come up
	I1105 19:10:45.239024   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239453   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.239406   75165 retry.go:31] will retry after 239.892312ms: waiting for machine to come up
	I1105 19:10:45.481036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481584   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.481569   75165 retry.go:31] will retry after 360.538082ms: waiting for machine to come up
	I1105 19:10:45.844144   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844565   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844596   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.844533   75165 retry.go:31] will retry after 387.597088ms: waiting for machine to come up
	I1105 19:10:46.234241   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234798   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.234738   75165 retry.go:31] will retry after 597.596298ms: waiting for machine to come up
	I1105 19:10:46.833721   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834170   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.834142   75165 retry.go:31] will retry after 688.240413ms: waiting for machine to come up
	I1105 19:10:47.523898   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524412   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524442   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:47.524377   75165 retry.go:31] will retry after 826.38207ms: waiting for machine to come up
	I1105 19:10:48.352258   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352787   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352809   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:48.352681   75165 retry.go:31] will retry after 1.381579847s: waiting for machine to come up
	I1105 19:10:48.547186   73732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104175993s)
	I1105 19:10:48.547221   73732 crio.go:469] duration metric: took 2.104326973s to extract the tarball
	I1105 19:10:48.547231   73732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:10:48.583027   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:48.630180   73732 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:10:48.630208   73732 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:10:48.630218   73732 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.31.2 crio true true} ...
	I1105 19:10:48.630349   73732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-271881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:10:48.630412   73732 ssh_runner.go:195] Run: crio config
	I1105 19:10:48.682182   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:48.682204   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:48.682213   73732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:10:48.682232   73732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-271881 NodeName:embed-certs-271881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:10:48.682354   73732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-271881"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:10:48.682412   73732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:10:48.691968   73732 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:10:48.692031   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:10:48.700980   73732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:10:48.716797   73732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:10:48.732408   73732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1105 19:10:48.748354   73732 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1105 19:10:48.751791   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:48.763068   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:48.893747   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:10:48.910247   73732 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881 for IP: 192.168.39.58
	I1105 19:10:48.910270   73732 certs.go:194] generating shared ca certs ...
	I1105 19:10:48.910303   73732 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:10:48.910488   73732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:10:48.910547   73732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:10:48.910561   73732 certs.go:256] generating profile certs ...
	I1105 19:10:48.910673   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/client.key
	I1105 19:10:48.910768   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key.0a454894
	I1105 19:10:48.910837   73732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key
	I1105 19:10:48.911021   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:10:48.911059   73732 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:10:48.911071   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:10:48.911116   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:10:48.911160   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:10:48.911196   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:10:48.911265   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:48.912104   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:10:48.969066   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:10:49.000713   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:10:49.040367   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:10:49.068456   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1105 19:10:49.094166   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:10:49.115986   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:10:49.137770   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:10:49.161140   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:10:49.182996   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:10:49.206578   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:10:49.230006   73732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:10:49.245835   73732 ssh_runner.go:195] Run: openssl version
	I1105 19:10:49.251252   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:10:49.261237   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265318   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265398   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.270753   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:10:49.280568   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:10:49.290580   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294567   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294644   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.299812   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:10:49.309398   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:10:49.319451   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323490   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323543   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.328708   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:10:49.338805   73732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:10:49.342918   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:10:49.348526   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:10:49.353943   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:10:49.359527   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:10:49.364886   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:10:49.370119   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 19:10:49.375437   73732 kubeadm.go:392] StartCluster: {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:10:49.375531   73732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:10:49.375572   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.415844   73732 cri.go:89] found id: ""
	I1105 19:10:49.415916   73732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:10:49.425336   73732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:10:49.425402   73732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:10:49.425474   73732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:10:49.434717   73732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:10:49.435831   73732 kubeconfig.go:125] found "embed-certs-271881" server: "https://192.168.39.58:8443"
	I1105 19:10:49.437903   73732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:10:49.446625   73732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I1105 19:10:49.446657   73732 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:10:49.446668   73732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:10:49.446732   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.479546   73732 cri.go:89] found id: ""
	I1105 19:10:49.479639   73732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:10:49.499034   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:10:49.510134   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:10:49.510159   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:10:49.510203   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:10:49.520482   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:10:49.520544   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:10:49.530750   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:10:49.539113   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:10:49.539183   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:10:49.548104   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.556754   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:10:49.556811   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.565606   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:10:49.574023   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:10:49.574091   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:10:49.582888   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:10:49.591876   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:49.688517   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.070191   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.38163928s)
	I1105 19:10:51.070240   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.267774   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.329051   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.406120   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:10:51.406226   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:51.907080   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:52.406468   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:49.735558   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735923   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735987   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:49.735914   75165 retry.go:31] will retry after 1.132319443s: waiting for machine to come up
	I1105 19:10:50.870267   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870770   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870801   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:50.870715   75165 retry.go:31] will retry after 1.791598796s: waiting for machine to come up
	I1105 19:10:52.664538   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665055   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:52.664912   75165 retry.go:31] will retry after 1.910294965s: waiting for machine to come up
	I1105 19:10:52.907103   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.407319   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.421763   73732 api_server.go:72] duration metric: took 2.015640262s to wait for apiserver process to appear ...
	I1105 19:10:53.421794   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:10:53.421816   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.752768   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.752803   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.752819   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.772365   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.772412   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.922705   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.928293   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:55.928329   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.422875   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.430633   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.430667   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.922156   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.934958   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.935016   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:57.422646   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:57.428784   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:10:57.435298   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:10:57.435319   73732 api_server.go:131] duration metric: took 4.013519207s to wait for apiserver health ...
	I1105 19:10:57.435327   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:57.435333   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:57.437061   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:10:57.438374   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:10:57.448509   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:10:57.465994   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:10:57.474649   73732 system_pods.go:59] 8 kube-system pods found
	I1105 19:10:57.474682   73732 system_pods.go:61] "coredns-7c65d6cfc9-nwzpq" [be8aa054-3f68-4c19-bae3-9d9cfcb51869] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:10:57.474691   73732 system_pods.go:61] "etcd-embed-certs-271881" [c37c829b-1dca-4659-b24c-4559304d9fe0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:10:57.474703   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [6df78e2a-1360-4c4b-b451-c96aa60f24ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:10:57.474710   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [95a6baca-c246-4043-acbc-235b076a89b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:10:57.474723   73732 system_pods.go:61] "kube-proxy-f945s" [2cb835f0-3727-4dd1-bd21-a21554ffdc0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 19:10:57.474730   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [53e044c5-199c-46f4-b3db-d3b65a8203aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:10:57.474741   73732 system_pods.go:61] "metrics-server-6867b74b74-vw2sm" [403d0c5f-d870-4f89-8caa-f5e9c8bf9ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:10:57.474748   73732 system_pods.go:61] "storage-provisioner" [13a89bf9-fb97-413a-9948-1c69780784cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 19:10:57.474758   73732 system_pods.go:74] duration metric: took 8.737357ms to wait for pod list to return data ...
	I1105 19:10:57.474769   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:10:57.480599   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:10:57.480623   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:10:57.480634   73732 node_conditions.go:105] duration metric: took 5.857622ms to run NodePressure ...
	I1105 19:10:57.480651   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:54.577390   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577939   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577969   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:54.577885   75165 retry.go:31] will retry after 3.393120773s: waiting for machine to come up
	I1105 19:10:57.971960   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972441   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:57.972370   75165 retry.go:31] will retry after 4.425954537s: waiting for machine to come up
	I1105 19:10:57.896717   73732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902115   73732 kubeadm.go:739] kubelet initialised
	I1105 19:10:57.902138   73732 kubeadm.go:740] duration metric: took 5.39576ms waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902152   73732 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:10:57.907293   73732 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:10:59.913946   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:02.414802   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:03.663928   74485 start.go:364] duration metric: took 3m10.909065205s to acquireMachinesLock for "old-k8s-version-567666"
	I1105 19:11:03.664023   74485 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:03.664038   74485 fix.go:54] fixHost starting: 
	I1105 19:11:03.664514   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:03.664569   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:03.682846   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I1105 19:11:03.683341   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:03.683786   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:11:03.683812   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:03.684219   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:03.684407   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:03.684552   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetState
	I1105 19:11:03.686262   74485 fix.go:112] recreateIfNeeded on old-k8s-version-567666: state=Stopped err=<nil>
	I1105 19:11:03.686295   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	W1105 19:11:03.686440   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:03.688047   74485 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-567666" ...
	I1105 19:11:02.401454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.401980   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Found IP for machine: 192.168.50.10
	I1105 19:11:02.402015   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has current primary IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.402025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserving static IP address...
	I1105 19:11:02.402384   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.402413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserved static IP address: 192.168.50.10
	I1105 19:11:02.402432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | skip adding static IP to network mk-default-k8s-diff-port-608095 - found existing host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"}
	I1105 19:11:02.402445   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for SSH to be available...
	I1105 19:11:02.402461   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Getting to WaitForSSH function...
	I1105 19:11:02.404454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404751   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.404778   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404915   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH client type: external
	I1105 19:11:02.404964   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa (-rw-------)
	I1105 19:11:02.405032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:02.405059   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | About to run SSH command:
	I1105 19:11:02.405072   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | exit 0
	I1105 19:11:02.526769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:02.527147   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetConfigRaw
	I1105 19:11:02.527756   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.530014   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530325   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.530357   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530527   74141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/config.json ...
	I1105 19:11:02.530708   74141 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:02.530728   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:02.530921   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.532868   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533184   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.533215   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533334   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.533493   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533630   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533761   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.533930   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.534116   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.534128   74141 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:02.631085   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:02.631114   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631351   74141 buildroot.go:166] provisioning hostname "default-k8s-diff-port-608095"
	I1105 19:11:02.631376   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631540   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.634037   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634371   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.634400   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634517   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.634691   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634849   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634995   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.635136   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.635310   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.635326   74141 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-608095 && echo "default-k8s-diff-port-608095" | sudo tee /etc/hostname
	I1105 19:11:02.744298   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-608095
	
	I1105 19:11:02.744327   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.747036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747348   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.747379   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747555   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.747716   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747846   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747940   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.748061   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.748266   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.748284   74141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-608095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-608095/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-608095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:02.850828   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:02.850854   74141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:02.850906   74141 buildroot.go:174] setting up certificates
	I1105 19:11:02.850923   74141 provision.go:84] configureAuth start
	I1105 19:11:02.850935   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.851260   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.853803   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854062   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.854088   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854203   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.856341   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856629   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.856659   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856747   74141 provision.go:143] copyHostCerts
	I1105 19:11:02.856804   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:02.856823   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:02.856874   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:02.856987   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:02.856997   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:02.857017   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:02.857075   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:02.857082   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:02.857100   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:02.857148   74141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-608095 san=[127.0.0.1 192.168.50.10 default-k8s-diff-port-608095 localhost minikube]
	I1105 19:11:03.048307   74141 provision.go:177] copyRemoteCerts
	I1105 19:11:03.048362   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:03.048386   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.050951   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051303   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.051353   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051556   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.051785   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.051953   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.052084   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.128441   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:03.150680   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1105 19:11:03.172480   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:03.194311   74141 provision.go:87] duration metric: took 343.374586ms to configureAuth
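	Note: the three files copied to /etc/docker above are a standard TLS triple (CA certificate, server certificate, server key) produced by the configureAuth step, with the SANs listed in the "generating server cert" line. A minimal sketch for double-checking them on the guest, using only the paths that appear in the log and plain openssl (nothing minikube-specific):
	  # confirm the copied server certificate chains to the copied CA
	  sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	  # list the SANs baked in by the generation step above
	  sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'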
	I1105 19:11:03.194338   74141 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:03.194499   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:03.194560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.197209   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197585   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.197603   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197822   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.198006   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198168   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198336   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.198503   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.198686   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.198706   74141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:03.429895   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:03.429926   74141 machine.go:96] duration metric: took 899.201597ms to provisionDockerMachine
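	Note: the SSH command above only writes a one-line options file and restarts the runtime; a quick manual check of the result, assuming the same paths as in the log:
	  # file written by the provisioning step
	  cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  # confirm CRI-O came back up after the restart
	  sudo systemctl is-active crio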
	I1105 19:11:03.429941   74141 start.go:293] postStartSetup for "default-k8s-diff-port-608095" (driver="kvm2")
	I1105 19:11:03.429955   74141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:03.429976   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.430329   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:03.430364   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.433455   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.433791   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.433820   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.434009   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.434323   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.434500   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.434659   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.514652   74141 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:03.518678   74141 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:03.518711   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:03.518774   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:03.518877   74141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:03.519014   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:03.528972   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:03.555892   74141 start.go:296] duration metric: took 125.936355ms for postStartSetup
	I1105 19:11:03.555939   74141 fix.go:56] duration metric: took 19.896481237s for fixHost
	I1105 19:11:03.555966   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.558764   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559153   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.559183   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559402   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.559610   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559788   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559933   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.560116   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.560292   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.560303   74141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:03.663723   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833863.637227261
	
	I1105 19:11:03.663751   74141 fix.go:216] guest clock: 1730833863.637227261
	I1105 19:11:03.663766   74141 fix.go:229] Guest: 2024-11-05 19:11:03.637227261 +0000 UTC Remote: 2024-11-05 19:11:03.555945261 +0000 UTC m=+239.048686257 (delta=81.282ms)
	I1105 19:11:03.663815   74141 fix.go:200] guest clock delta is within tolerance: 81.282ms
	I1105 19:11:03.663822   74141 start.go:83] releasing machines lock for "default-k8s-diff-port-608095", held for 20.004399519s
	I1105 19:11:03.663858   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.664158   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:03.666922   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667372   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.667408   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668101   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668297   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668412   74141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:03.668478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.668748   74141 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:03.668774   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.671463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671781   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.671810   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671903   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672175   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672333   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.672369   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.672417   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672578   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.672598   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672779   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.673106   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.777585   74141 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:03.783343   74141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:03.927951   74141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:03.933308   74141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:03.933380   74141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:03.948472   74141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:03.948499   74141 start.go:495] detecting cgroup driver to use...
	I1105 19:11:03.948572   74141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:03.963929   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:03.978578   74141 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:03.978643   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:03.992096   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:04.006036   74141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:04.114061   74141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:04.274136   74141 docker.go:233] disabling docker service ...
	I1105 19:11:04.274220   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:04.287806   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:04.300294   74141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:04.429899   74141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:04.576075   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:04.590934   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:04.611299   74141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:04.611375   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.623876   74141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:04.623949   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.634333   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.644768   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.654549   74141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:04.665001   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.675464   74141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.693845   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.703982   74141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:04.713758   74141 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:04.713820   74141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:04.727618   74141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:04.737679   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:04.866928   74141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:04.966529   74141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:04.966599   74141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:04.971536   74141 start.go:563] Will wait 60s for crictl version
	I1105 19:11:04.971602   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:11:04.975344   74141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:05.015910   74141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:05.015987   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.043577   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.072767   74141 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
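	Note: all of the runtime reconfiguration above is done with in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), followed by a daemon-reload and a crio restart. A small sketch for spot-checking the rewritten keys, reusing the exact file path from the log:
	  # values rewritten by the sed commands above
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # runtime identity as reported a few lines above (cri-o 1.29.1, CRI API v1)
	  sudo crictl version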
	I1105 19:11:03.689374   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .Start
	I1105 19:11:03.689560   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring networks are active...
	I1105 19:11:03.690290   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network default is active
	I1105 19:11:03.690659   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network mk-old-k8s-version-567666 is active
	I1105 19:11:03.691130   74485 main.go:141] libmachine: (old-k8s-version-567666) Getting domain xml...
	I1105 19:11:03.691890   74485 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:11:05.006949   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting to get IP...
	I1105 19:11:05.008062   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.008547   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.008605   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.008523   75309 retry.go:31] will retry after 290.124771ms: waiting for machine to come up
	I1105 19:11:05.300185   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.300768   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.300803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.300717   75309 retry.go:31] will retry after 292.829683ms: waiting for machine to come up
	I1105 19:11:05.595365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.595881   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.595907   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.595831   75309 retry.go:31] will retry after 447.168257ms: waiting for machine to come up
	I1105 19:11:06.045320   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.045946   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.045976   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.045893   75309 retry.go:31] will retry after 420.272812ms: waiting for machine to come up
	I1105 19:11:06.467556   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.468012   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.468039   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.467962   75309 retry.go:31] will retry after 657.733497ms: waiting for machine to come up
	I1105 19:11:07.128022   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:07.128531   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:07.128559   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:07.128484   75309 retry.go:31] will retry after 922.664226ms: waiting for machine to come up
	I1105 19:11:04.416533   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:06.915445   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:07.417579   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:07.417610   73732 pod_ready.go:82] duration metric: took 9.510292246s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:07.417620   73732 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:05.073913   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:05.077086   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077430   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:05.077468   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077691   74141 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:05.081724   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
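	Note: the bash one-liner above is how minikube pins host.minikube.internal in the guest's /etc/hosts: it filters out any stale entry and appends the gateway IP. Verifying it afterwards is just:
	  grep 'host.minikube.internal' /etc/hosts    # expect: 192.168.50.1  host.minikube.internal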
	I1105 19:11:05.093668   74141 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:05.093785   74141 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:05.093853   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:05.128693   74141 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:05.128753   74141 ssh_runner.go:195] Run: which lz4
	I1105 19:11:05.133116   74141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:05.137101   74141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:05.137126   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:11:06.379012   74141 crio.go:462] duration metric: took 1.245926141s to copy over tarball
	I1105 19:11:06.379088   74141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:08.545369   74141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.166238549s)
	I1105 19:11:08.545405   74141 crio.go:469] duration metric: took 2.166364449s to extract the tarball
	I1105 19:11:08.545422   74141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:08.581651   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:08.628768   74141 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:11:08.628795   74141 cache_images.go:84] Images are preloaded, skipping loading
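	Note: the preload path above is: stat /preloaded.tar.lz4 (absent), scp the ~392 MB tarball from the host cache, extract it into /var with tar -I lz4, delete the tarball, then re-run crictl images, at which point all images are reported as preloaded. A minimal manual check of the same condition, assuming crictl is on PATH as it is in the log:
	  # the image crio.go:510 was looking for before the preload
	  sudo crictl images | grep kube-apiserver    # expect registry.k8s.io/kube-apiserver tagged v1.31.2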
	I1105 19:11:08.628805   74141 kubeadm.go:934] updating node { 192.168.50.10 8444 v1.31.2 crio true true} ...
	I1105 19:11:08.628937   74141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-608095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:08.629056   74141 ssh_runner.go:195] Run: crio config
	I1105 19:11:08.690112   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:08.690140   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:08.690152   74141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:08.690184   74141 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-608095 NodeName:default-k8s-diff-port-608095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:08.690346   74141 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-608095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:08.690415   74141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:08.700222   74141 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:08.700294   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:08.709542   74141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1105 19:11:08.725723   74141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:08.741985   74141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
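	Note: the kubeadm/kubelet/kube-proxy YAML rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before being promoted to kubeadm.yaml later in the log. One way to sanity-check such a file by hand is kubeadm's own validator; this is only a sketch, assuming the 'kubeadm config validate' subcommand available in recent kubeadm releases and the staged binary path shown in the surrounding lines:
	  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new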
	I1105 19:11:08.758655   74141 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:08.762296   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:08.774119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:08.910000   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:08.926765   74141 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095 for IP: 192.168.50.10
	I1105 19:11:08.926788   74141 certs.go:194] generating shared ca certs ...
	I1105 19:11:08.926806   74141 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:08.927006   74141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:08.927069   74141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:08.927080   74141 certs.go:256] generating profile certs ...
	I1105 19:11:08.927157   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/client.key
	I1105 19:11:08.927229   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key.f2b96156
	I1105 19:11:08.927281   74141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key
	I1105 19:11:08.927456   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:08.927506   74141 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:08.927516   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:08.927549   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:08.927585   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:08.927620   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:08.927682   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:08.928417   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:08.971359   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:09.011632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:09.049748   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:09.078632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 19:11:09.105786   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:09.127855   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:09.151461   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:11:09.174068   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:09.196733   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:09.219111   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:09.241335   74141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:09.257040   74141 ssh_runner.go:195] Run: openssl version
	I1105 19:11:09.262371   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:09.272232   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276300   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276362   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.281747   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:09.291864   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:09.302012   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306085   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306142   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.311374   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:09.321334   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:09.331208   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335401   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335451   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.340595   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
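	Note: the test -L / ln -fs pattern above is the usual OpenSSL subject-hash symlink scheme (the same layout c_rehash/update-ca-certificates produce): each CA under /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 link, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 names come from. Deriving one of them by hand with the same openssl call the log uses:
	  # prints the subject hash, e.g. b5213941, so the link becomes /etc/ssl/certs/b5213941.0
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  ls -l /etc/ssl/certs/b5213941.0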
	I1105 19:11:09.350430   74141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:09.354622   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:09.360165   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:09.365624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:09.371545   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:09.377226   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:09.382624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
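	Note: in the openssl calls above, -checkend 86400 makes the command exit non-zero if the certificate expires within the next 86400 seconds (24 hours), so a run of zero exits means every checked control-plane certificate is still valid for at least a day. Equivalent manual check against one of the same files, paths as in the log:
	  # prints the notAfter date and exits 0 only if the cert is good for >=24h
	  sudo openssl x509 -noout -enddate -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo 'valid for >=24h'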
	I1105 19:11:09.387929   74141 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:09.388032   74141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:09.388076   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.429707   74141 cri.go:89] found id: ""
	I1105 19:11:09.429783   74141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:09.440455   74141 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:09.440476   74141 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:09.440527   74141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:09.451745   74141 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:09.452609   74141 kubeconfig.go:125] found "default-k8s-diff-port-608095" server: "https://192.168.50.10:8444"
	I1105 19:11:09.454539   74141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:09.463900   74141 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.10
	I1105 19:11:09.463926   74141 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:09.463936   74141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:09.463987   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.497583   74141 cri.go:89] found id: ""
	I1105 19:11:09.497656   74141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:09.513767   74141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:09.523219   74141 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:09.523237   74141 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:09.523284   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1105 19:11:09.533116   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:09.533181   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:09.542453   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1105 19:11:08.053120   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:08.053610   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:08.053636   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:08.053587   75309 retry.go:31] will retry after 947.415519ms: waiting for machine to come up
	I1105 19:11:09.002803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:09.003423   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:09.003452   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:09.003363   75309 retry.go:31] will retry after 1.07978111s: waiting for machine to come up
	I1105 19:11:10.084404   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:10.084808   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:10.084830   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:10.084784   75309 retry.go:31] will retry after 1.482510322s: waiting for machine to come up
	I1105 19:11:11.568421   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:11.568840   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:11.568869   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:11.568791   75309 retry.go:31] will retry after 1.630983434s: waiting for machine to come up
	I1105 19:11:08.426308   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.426337   73732 pod_ready.go:82] duration metric: took 1.008708779s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.426350   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432238   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.432264   73732 pod_ready.go:82] duration metric: took 5.905051ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432276   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438187   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.438214   73732 pod_ready.go:82] duration metric: took 5.9294ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438226   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443794   73732 pod_ready.go:93] pod "kube-proxy-f945s" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.443823   73732 pod_ready.go:82] duration metric: took 5.587862ms for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443835   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:10.449498   73732 pod_ready.go:103] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:12.454934   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:12.454965   73732 pod_ready.go:82] duration metric: took 4.011121022s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:12.455003   73732 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:09.551174   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:09.551235   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:09.560481   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.571928   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:09.571997   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.583935   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1105 19:11:09.595336   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:09.595401   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:09.605061   74141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:09.613920   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:09.718759   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.680100   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.901034   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.951868   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
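Restarting the primary control plane re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. Purely as an illustration of that sequence (run locally here rather than through minikube's ssh_runner), the same commands could be driven like this:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phase names copied from the logged commands above.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			// Mirrors: sudo env PATH=... kubeadm init phase <phase> --config /var/tmp/minikube/kubeadm.yaml
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				panic(fmt.Errorf("phase %q failed: %v\n%s", p, err, out))
			}
		}
	}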
	I1105 19:11:10.997866   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:10.997956   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.498113   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.998192   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.498517   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.998919   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:13.013078   74141 api_server.go:72] duration metric: took 2.01520799s to wait for apiserver process to appear ...
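The wait for the apiserver process is just pgrep re-run roughly every 500ms until it finds a match. A compact sketch of that loop (pattern and interval taken from the log; the helper name is invented):

	package main

	import (
		"os/exec"
		"time"
	)

	// waitForProcess retries pgrep until the pattern matches or the deadline passes.
	func waitForProcess(pattern string, timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
				return true // pgrep exits 0 when at least one process matches
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}

	func main() {
		waitForProcess("kube-apiserver.*minikube.*", time.Minute)
	}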
	I1105 19:11:13.013106   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:11:13.013136   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.042333   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.042388   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.042404   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.085574   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.085602   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.513733   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.518755   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:16.518789   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.013278   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.019214   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:17.019236   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.513886   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.519036   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:11:17.528970   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:11:17.529000   74141 api_server.go:131] duration metric: took 4.515887773s to wait for apiserver health ...
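The healthz wait above first gets 403 because the anonymous probe is not yet allowed to read /healthz, then 500 while the rbac/bootstrap-roles post-start hook is still pending, and simply keeps polling until a 200 comes back. A rough equivalent of that probe (certificate verification skipped, as for an unauthenticated health check; URL and ~500ms interval taken from the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.50.10:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}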
	I1105 19:11:17.529009   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:17.529016   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:17.530429   74141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:11:13.201891   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:13.202425   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:13.202453   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:13.202387   75309 retry.go:31] will retry after 2.689744765s: waiting for machine to come up
	I1105 19:11:15.893632   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:15.893989   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:15.894034   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:15.893964   75309 retry.go:31] will retry after 2.460566804s: waiting for machine to come up
	I1105 19:11:14.465748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:16.961287   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:17.531600   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:11:17.544876   74141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
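The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is a bridge CNI configuration. Its exact contents are not shown in the log; the snippet below writes a generic bridge conflist of the kind the bridge plugin accepts (subnet, bridge name, and plugin list are assumptions, not minikube's generated file), kept in Go for consistency with the other sketches:

	package main

	import "os"

	// A minimal bridge CNI config; the real 1-k8s.conflist minikube generates may differ.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "k8s",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}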
	I1105 19:11:17.567835   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:11:17.583925   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:11:17.583976   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:11:17.583988   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:11:17.583999   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:11:17.584015   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:11:17.584027   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:11:17.584041   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:11:17.584052   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:11:17.584060   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:11:17.584068   74141 system_pods.go:74] duration metric: took 16.206948ms to wait for pod list to return data ...
	I1105 19:11:17.584081   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:11:17.593935   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:11:17.593960   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:11:17.593971   74141 node_conditions.go:105] duration metric: took 9.883295ms to run NodePressure ...
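The NodePressure verification reads the node's advertised capacity (ephemeral storage, CPU) straight from its status. With client-go that amounts to something like the following sketch (again not the node_conditions.go implementation; the kubeconfig path is hypothetical):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		}
	}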
	I1105 19:11:17.593988   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:17.929181   74141 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933853   74141 kubeadm.go:739] kubelet initialised
	I1105 19:11:17.933879   74141 kubeadm.go:740] duration metric: took 4.667992ms waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933888   74141 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:17.940560   74141 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.952799   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952832   74141 pod_ready.go:82] duration metric: took 12.240861ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.952845   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952856   74141 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.959079   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959105   74141 pod_ready.go:82] duration metric: took 6.23649ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.959119   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959130   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.963797   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963817   74141 pod_ready.go:82] duration metric: took 4.681011ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.963830   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963837   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.970915   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970935   74141 pod_ready.go:82] duration metric: took 7.091116ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.970945   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970951   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.371478   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371503   74141 pod_ready.go:82] duration metric: took 400.5454ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.371512   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371519   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.771731   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771768   74141 pod_ready.go:82] duration metric: took 400.239012ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.771783   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771792   74141 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:19.171239   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171271   74141 pod_ready.go:82] duration metric: took 399.46983ms for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:19.171286   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171296   74141 pod_ready.go:39] duration metric: took 1.237397637s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:19.171315   74141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:11:19.185845   74141 ops.go:34] apiserver oom_adj: -16
	I1105 19:11:19.185869   74141 kubeadm.go:597] duration metric: took 9.745385943s to restartPrimaryControlPlane
	I1105 19:11:19.185880   74141 kubeadm.go:394] duration metric: took 9.797958845s to StartCluster
	I1105 19:11:19.185901   74141 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.185989   74141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:19.187722   74141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.187971   74141 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:11:19.188036   74141 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:11:19.188142   74141 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188160   74141 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-608095"
	I1105 19:11:19.188159   74141 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-608095"
	W1105 19:11:19.188171   74141 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:11:19.188199   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188236   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:19.188248   74141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-608095"
	I1105 19:11:19.188273   74141 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188310   74141 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.188323   74141 addons.go:243] addon metrics-server should already be in state true
	I1105 19:11:19.188379   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188526   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188569   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188674   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188725   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188802   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188823   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.189792   74141 out.go:177] * Verifying Kubernetes components...
	I1105 19:11:19.191119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:19.203875   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I1105 19:11:19.204313   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.204803   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.204830   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.205083   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I1105 19:11:19.205175   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.205432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.205488   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.205973   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.205999   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.206357   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.206916   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.206955   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.207292   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I1105 19:11:19.207671   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.208122   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.208146   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.208484   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.208861   74141 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.208882   74141 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:11:19.208909   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.209004   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209045   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.209234   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209273   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.223963   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I1105 19:11:19.224405   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.225044   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.225074   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.225460   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.226141   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I1105 19:11:19.226463   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.226509   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.226577   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.226757   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I1105 19:11:19.227058   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.227081   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.227475   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.227558   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.227797   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.228116   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.228136   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.228530   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.228755   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.229870   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.230471   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.232239   74141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:19.232263   74141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:11:19.233508   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:11:19.233527   74141 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:11:19.233548   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.233607   74141 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.233626   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:11:19.233647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.237337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237365   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237895   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237928   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237958   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237972   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.238155   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238270   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238440   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238623   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238681   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.239040   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.243685   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1105 19:11:19.244073   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.244584   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.244602   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.244951   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.245112   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.246617   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.246814   74141 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.246830   74141 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:11:19.246845   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.249467   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.249896   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.249925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.250139   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.250317   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.250466   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.250636   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.396917   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:19.412224   74141 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:19.541493   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.566934   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:11:19.566982   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:11:19.567627   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.607685   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:11:19.607717   74141 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:11:19.640921   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:19.640959   74141 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:11:19.674550   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:20.091222   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091248   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091528   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091583   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091596   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091605   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091807   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091868   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091853   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.105073   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.105093   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.105426   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.105442   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719139   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.151476995s)
	I1105 19:11:20.719187   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719194   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.044605505s)
	I1105 19:11:20.719236   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719256   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719511   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719582   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719593   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719596   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719631   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719580   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719643   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719654   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719670   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719680   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719897   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719946   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719948   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719903   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719982   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719990   74141 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-608095"
	I1105 19:11:20.719927   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.721843   74141 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
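Each enabled addon is applied by shelling out to the cluster's bundled kubectl with KUBECONFIG pointed at the in-VM admin config, exactly as the Run: lines above show. An illustrative wrapper for that step (paths copied from the log, helper name invented; not minikube's addons code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyAddon runs the cluster's own kubectl against one or more addon manifests.
	func applyAddon(manifests ...string) error {
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("sudo", args...) // sudo accepts the leading VAR=value assignment
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := applyAddon(
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		); err != nil {
			fmt.Fprintln(os.Stderr, "metrics-server apply failed:", err)
		}
	}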
	I1105 19:11:22.583507   73496 start.go:364] duration metric: took 54.335724939s to acquireMachinesLock for "no-preload-459223"
	I1105 19:11:22.583581   73496 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:22.583590   73496 fix.go:54] fixHost starting: 
	I1105 19:11:22.584018   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:22.584054   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:22.603921   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1105 19:11:22.604367   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:22.604825   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:11:22.604845   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:22.605233   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:22.605408   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:22.605534   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:11:22.607289   73496 fix.go:112] recreateIfNeeded on no-preload-459223: state=Stopped err=<nil>
	I1105 19:11:22.607314   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	W1105 19:11:22.607458   73496 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:22.609455   73496 out.go:177] * Restarting existing kvm2 VM for "no-preload-459223" ...
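Restarting the stopped kvm2 machine means asking libvirt to boot the already-defined domain again; the kvm2 driver talks to libvirt for this. Purely as an illustration of that call sequence (using the libvirt Go bindings, with the connection URI assumed to be the default system socket and error handling trimmed), not the driver's actual code:

	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		dom, err := conn.LookupDomainByName("no-preload-459223")
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		// Create() boots a defined-but-stopped domain (the libvirt equivalent of `virsh start`).
		if err := dom.Create(); err != nil {
			panic(err)
		}
		fmt.Println("domain started")
	}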
	I1105 19:11:18.357643   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:18.358065   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:18.358099   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:18.358009   75309 retry.go:31] will retry after 3.036834524s: waiting for machine to come up
	I1105 19:11:21.398221   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398763   74485 main.go:141] libmachine: (old-k8s-version-567666) Found IP for machine: 192.168.61.125
	I1105 19:11:21.398825   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has current primary IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398843   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserving static IP address...
	I1105 19:11:21.399327   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.399350   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserved static IP address: 192.168.61.125
	I1105 19:11:21.399365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | skip adding static IP to network mk-old-k8s-version-567666 - found existing host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"}
	I1105 19:11:21.399379   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:11:21.399394   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting for SSH to be available...
	I1105 19:11:21.401270   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401664   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.401691   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401866   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:11:21.401897   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:11:21.401935   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:21.401949   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:11:21.401959   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:11:21.527815   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:21.528165   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:11:21.528874   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.531373   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531647   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.531672   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531876   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:11:21.532071   74485 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:21.532092   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:21.532332   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.534177   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534431   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.534465   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534556   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.534716   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534845   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534960   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.535142   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.535329   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.535341   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:21.643321   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:21.643354   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643618   74485 buildroot.go:166] provisioning hostname "old-k8s-version-567666"
	I1105 19:11:21.643646   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643812   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.646230   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646628   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.646666   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.647037   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647167   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647290   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.647421   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.647579   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.647592   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-567666 && echo "old-k8s-version-567666" | sudo tee /etc/hostname
	I1105 19:11:21.770209   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-567666
	
	I1105 19:11:21.770255   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.772932   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773314   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.773346   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773484   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.773691   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773950   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.774121   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.774357   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.774386   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-567666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-567666/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-567666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:21.890834   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:21.890860   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:21.890915   74485 buildroot.go:174] setting up certificates
	I1105 19:11:21.890929   74485 provision.go:84] configureAuth start
	I1105 19:11:21.890944   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.891224   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.893835   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894256   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.894285   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.896436   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896699   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.896715   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896893   74485 provision.go:143] copyHostCerts
	I1105 19:11:21.896951   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:21.896967   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:21.897037   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:21.897163   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:21.897176   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:21.897205   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:21.897279   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:21.897289   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:21.897315   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:21.897396   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-567666 san=[127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666]
	I1105 19:11:21.962153   74485 provision.go:177] copyRemoteCerts
	I1105 19:11:21.962219   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:21.962257   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.964765   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965125   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.965166   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965330   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.965478   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.965603   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.965746   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.048519   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:22.072975   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 19:11:22.098263   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:22.120258   74485 provision.go:87] duration metric: took 229.316972ms to configureAuth
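
	The configureAuth step above regenerates the machine's server certificate with both IP and DNS SANs (127.0.0.1, 192.168.61.125, localhost, minikube, old-k8s-version-567666) and copies it to /etc/docker. As a point of reference only, here is a minimal self-contained Go sketch of producing a certificate with that SAN set; it self-signs for brevity instead of signing with minikube's ca.pem/ca-key.pem, so it is not the actual provisioner code.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate (2048-bit RSA is typical for this kind of provisioner).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-567666"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors the 26280h0m0s CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set reported in the provision.go:117 line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.125")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-567666"},
	}

	// Self-signed here; the real flow signs with the shared minikube CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
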
	I1105 19:11:22.120285   74485 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:22.120444   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:11:22.120516   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.123859   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124309   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.124344   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124536   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.124737   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.124922   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.125055   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.125213   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.125375   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.125388   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:22.349922   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:22.349964   74485 machine.go:96] duration metric: took 817.87332ms to provisionDockerMachine
	I1105 19:11:22.349979   74485 start.go:293] postStartSetup for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:11:22.349992   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:22.350014   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.350350   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:22.350385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.352922   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353310   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.353332   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353459   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.353638   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.353807   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.353921   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.437482   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:22.441617   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:22.441646   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:22.441711   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:22.441807   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:22.441929   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:22.451016   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:22.474199   74485 start.go:296] duration metric: took 124.207336ms for postStartSetup
	I1105 19:11:22.474233   74485 fix.go:56] duration metric: took 18.810197154s for fixHost
	I1105 19:11:22.474269   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.476786   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477119   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.477157   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477279   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.477471   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477621   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477753   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.477910   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.478070   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.478081   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:22.583343   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833882.558222038
	
	I1105 19:11:22.583363   74485 fix.go:216] guest clock: 1730833882.558222038
	I1105 19:11:22.583372   74485 fix.go:229] Guest: 2024-11-05 19:11:22.558222038 +0000 UTC Remote: 2024-11-05 19:11:22.474236871 +0000 UTC m=+209.862783450 (delta=83.985167ms)
	I1105 19:11:22.583418   74485 fix.go:200] guest clock delta is within tolerance: 83.985167ms
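
	The clock check above runs `date +%s.%N` on the guest and compares the result against the host's wall clock, only acting if the delta exceeds a tolerance. The following rough Go sketch shows that comparison; the 2s tolerance is an illustrative placeholder, not minikube's actual constant.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string printed by
// `date +%s.%N` on the guest into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730833882.558222038") // value taken from the log line above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
```
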
	I1105 19:11:22.583429   74485 start.go:83] releasing machines lock for "old-k8s-version-567666", held for 18.919444623s
	I1105 19:11:22.583460   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.583717   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:22.586183   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586479   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.586509   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586687   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587137   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587310   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587400   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:22.587448   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.587521   74485 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:22.587548   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.590145   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590474   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.590507   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590530   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590655   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.590831   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.590995   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.591010   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591037   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.591179   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.591286   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.591438   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.591558   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591702   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:19.461723   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:21.962582   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:22.702707   74485 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:22.708965   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:22.856764   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:22.863791   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:22.863866   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:22.883997   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:22.884022   74485 start.go:495] detecting cgroup driver to use...
	I1105 19:11:22.884094   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:22.901499   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:22.919358   74485 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:22.919422   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:22.936964   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:22.953538   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:23.077720   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:23.218316   74485 docker.go:233] disabling docker service ...
	I1105 19:11:23.218390   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:23.238316   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:23.251814   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:23.427386   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:23.552928   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:23.567149   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:23.587241   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 19:11:23.587307   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.597558   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:23.597620   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.607466   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.616794   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.626425   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:23.637121   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:23.649243   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:23.649305   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:23.664648   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:23.675060   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:23.812636   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:23.903326   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:23.903404   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:23.908377   74485 start.go:563] Will wait 60s for crictl version
	I1105 19:11:23.908434   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:23.912163   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:23.961712   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:23.961794   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:23.992951   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:24.032041   74485 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1105 19:11:20.723316   74141 addons.go:510] duration metric: took 1.53528546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1105 19:11:21.416385   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:23.416458   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:22.610737   73496 main.go:141] libmachine: (no-preload-459223) Calling .Start
	I1105 19:11:22.610910   73496 main.go:141] libmachine: (no-preload-459223) Ensuring networks are active...
	I1105 19:11:22.611680   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network default is active
	I1105 19:11:22.612057   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network mk-no-preload-459223 is active
	I1105 19:11:22.612426   73496 main.go:141] libmachine: (no-preload-459223) Getting domain xml...
	I1105 19:11:22.613081   73496 main.go:141] libmachine: (no-preload-459223) Creating domain...
	I1105 19:11:24.013821   73496 main.go:141] libmachine: (no-preload-459223) Waiting to get IP...
	I1105 19:11:24.014922   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.015467   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.015561   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.015439   75501 retry.go:31] will retry after 233.461829ms: waiting for machine to come up
	I1105 19:11:24.251339   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.252673   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.252799   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.252760   75501 retry.go:31] will retry after 276.401207ms: waiting for machine to come up
	I1105 19:11:24.531408   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.531964   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.531987   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.531909   75501 retry.go:31] will retry after 367.69826ms: waiting for machine to come up
	I1105 19:11:24.901179   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.901579   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.901608   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.901536   75501 retry.go:31] will retry after 602.654501ms: waiting for machine to come up
	I1105 19:11:25.505889   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:25.506403   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:25.506426   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:25.506364   75501 retry.go:31] will retry after 492.077165ms: waiting for machine to come up
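
	The no-preload-459223 VM above is polled for its DHCP lease with a growing, jittered delay between attempts ("will retry after ...: waiting for machine to come up"). A generic Go sketch of that retry-with-backoff pattern follows; the lookup callback, attempt count, and base delay are placeholders, not minikube's retry.go.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling lookup until it succeeds or the attempts run out,
// sleeping a jittered, growing delay between tries, similar in spirit to the
// "will retry after Xms: waiting for machine to come up" lines above.
func retryWithBackoff(attempts int, base time.Duration, lookup func() (string, error)) (string, error) {
	delay := base
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := retryWithBackoff(10, 200*time.Millisecond, func() (string, error) {
		calls++
		if calls < 4 { // pretend the DHCP lease shows up on the fourth poll
			return "", errors.New("no lease yet")
		}
		return "192.168.61.200", nil
	})
	fmt.Println(ip, err)
}
```
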
	I1105 19:11:24.033400   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:24.036549   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037128   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:24.037165   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037346   74485 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:24.042641   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:24.055174   74485 kubeadm.go:883] updating cluster {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:24.055327   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:11:24.055388   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:24.101655   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:24.101724   74485 ssh_runner.go:195] Run: which lz4
	I1105 19:11:24.105618   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:24.109705   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:24.109735   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 19:11:25.602158   74485 crio.go:462] duration metric: took 1.496564307s to copy over tarball
	I1105 19:11:25.602236   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
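
	The preload decision above hinges on whether `sudo crictl images --output json` already lists the expected images (here registry.k8s.io/kube-apiserver:v1.20.0); only when it does not is the ~450 MB preloaded tarball copied over and unpacked into /var. A small Go sketch of that check follows, assuming crictl's JSON output carries an `images` array whose entries have `repoTags`; the sample data is hypothetical, not from this run.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Assumed shape of `crictl images --output json`: a top-level "images" array with "repoTags".
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the wanted tag shows up anywhere in the listing.
func hasImage(raw []byte, want string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Hypothetical listing with only the pause image present.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("assuming images are not preloaded; transfer and extract the preload tarball")
	}
}
```
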
	I1105 19:11:23.963218   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:26.461963   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:25.419351   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:26.916693   74141 node_ready.go:49] node "default-k8s-diff-port-608095" has status "Ready":"True"
	I1105 19:11:26.916731   74141 node_ready.go:38] duration metric: took 7.50447744s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:26.916744   74141 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:26.922179   74141 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927845   74141 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.927879   74141 pod_ready.go:82] duration metric: took 5.666725ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927892   74141 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932723   74141 pod_ready.go:93] pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.932752   74141 pod_ready.go:82] duration metric: took 4.843531ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932761   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937108   74141 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.937137   74141 pod_ready.go:82] duration metric: took 4.368536ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937152   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.941970   74141 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.941995   74141 pod_ready.go:82] duration metric: took 4.833418ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.942008   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317480   74141 pod_ready.go:93] pod "kube-proxy-8v42c" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.317505   74141 pod_ready.go:82] duration metric: took 375.489077ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317517   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717923   74141 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.717945   74141 pod_ready.go:82] duration metric: took 400.42059ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717956   74141 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.000041   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.000558   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.000613   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.000525   75501 retry.go:31] will retry after 920.198126ms: waiting for machine to come up
	I1105 19:11:26.922134   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.922917   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.922951   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.922858   75501 retry.go:31] will retry after 1.071853506s: waiting for machine to come up
	I1105 19:11:27.996574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:27.996995   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:27.997020   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:27.996949   75501 retry.go:31] will retry after 1.283200825s: waiting for machine to come up
	I1105 19:11:29.282457   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:29.282942   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:29.282979   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:29.282903   75501 retry.go:31] will retry after 1.512809658s: waiting for machine to come up
	I1105 19:11:28.701223   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.098952901s)
	I1105 19:11:28.701253   74485 crio.go:469] duration metric: took 3.099065633s to extract the tarball
	I1105 19:11:28.701263   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:28.744214   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:28.778845   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:28.778868   74485 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:28.778962   74485 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:28.778945   74485 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.779024   74485 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.779039   74485 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.778939   74485 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.779067   74485 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.779083   74485 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.778957   74485 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781024   74485 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781003   74485 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.781052   74485 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.781002   74485 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.781088   74485 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.781114   74485 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.013637   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 19:11:29.043928   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.043936   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.044140   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.045892   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.046313   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.055792   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.081724   74485 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 19:11:29.081779   74485 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 19:11:29.081826   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.234925   74485 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 19:11:29.234966   74485 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.235046   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235079   74485 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 19:11:29.235112   74485 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.235136   74485 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 19:11:29.235152   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235167   74485 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.235200   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235238   74485 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 19:11:29.235277   74485 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.235298   74485 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 19:11:29.235320   74485 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.235333   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235352   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235351   74485 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 19:11:29.235385   74485 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.235415   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235426   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.251873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.251960   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.251985   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.252000   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.371298   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.415548   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.415592   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.415654   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.415710   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.415791   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.415868   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.466873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.544593   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.544660   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.586695   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.586714   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.586812   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.586916   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.606582   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 19:11:29.707767   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 19:11:29.707803   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 19:11:29.716195   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 19:11:29.723097   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 19:11:30.039971   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:30.182760   74485 cache_images.go:92] duration metric: took 1.403874987s to LoadCachedImages
	W1105 19:11:30.182890   74485 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1105 19:11:30.182912   74485 kubeadm.go:934] updating node { 192.168.61.125 8443 v1.20.0 crio true true} ...
	I1105 19:11:30.183052   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-567666 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:30.183146   74485 ssh_runner.go:195] Run: crio config
	I1105 19:11:30.235206   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:11:30.235241   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:30.235253   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:30.235277   74485 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-567666 NodeName:old-k8s-version-567666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 19:11:30.235433   74485 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-567666"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:30.235503   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 19:11:30.245189   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:30.245263   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:30.254772   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1105 19:11:30.271711   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:30.288568   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1105 19:11:30.309098   74485 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:30.313211   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
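
	The bash one-liner above rewrites /etc/hosts so exactly one line maps 192.168.61.125 to control-plane.minikube.internal: it drops any existing entry for that name and appends a fresh one. The same idea is sketched below in Go against an in-memory copy of the file; this is illustrative only, since the real runner performs the edit remotely via sudo.

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry drops any line that already resolves the given host name and
// appends a single fresh "ip\thost" entry, mirroring the grep -v / echo pipeline
// in the log line above.
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue // stale entry for this name; replaced below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
	fmt.Print(ensureHostEntry(hosts, "192.168.61.125", "control-plane.minikube.internal"))
}
```
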
	I1105 19:11:30.325637   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:30.447346   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:30.466863   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666 for IP: 192.168.61.125
	I1105 19:11:30.466884   74485 certs.go:194] generating shared ca certs ...
	I1105 19:11:30.466898   74485 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:30.467086   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:30.467152   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:30.467165   74485 certs.go:256] generating profile certs ...
	I1105 19:11:30.467322   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key
	I1105 19:11:30.467398   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8
	I1105 19:11:30.467448   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key
	I1105 19:11:30.467614   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:30.467656   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:30.467676   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:30.467722   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:30.467759   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:30.467788   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:30.467847   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:30.468756   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:30.532325   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:30.559936   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:30.592995   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:30.632421   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 19:11:30.662285   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:11:30.696292   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:30.725642   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:30.750231   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:30.773213   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:30.796269   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:30.820261   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:30.837059   74485 ssh_runner.go:195] Run: openssl version
	I1105 19:11:30.842937   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:30.855033   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859637   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859720   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.865747   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:30.877678   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:30.890762   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895576   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895642   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.901686   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:30.912689   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:30.923800   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928911   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928984   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.934782   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:30.947059   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:30.951934   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:30.958065   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:30.965341   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:30.971725   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:30.977606   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:30.983486   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 19:11:30.989212   74485 kubeadm.go:392] StartCluster: {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:30.989350   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:30.989411   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.031794   74485 cri.go:89] found id: ""
	I1105 19:11:31.031884   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:31.043178   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:31.043202   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:31.043291   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:31.054102   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:31.055256   74485 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:31.055924   74485 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-567666" cluster setting kubeconfig missing "old-k8s-version-567666" context setting]
	I1105 19:11:31.056913   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:31.064220   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:31.074582   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.125
	I1105 19:11:31.074618   74485 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:31.074628   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:31.074706   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.111157   74485 cri.go:89] found id: ""
	I1105 19:11:31.111241   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:31.130027   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:31.139917   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:31.139939   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:31.140007   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:31.150790   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:31.150868   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:31.161397   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:31.170394   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:31.170462   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:31.179594   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.188892   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:31.188952   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.199840   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:31.209166   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:31.209244   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:31.219687   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:31.231079   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:31.350667   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.094565   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.334807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.457538   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.534503   74485 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:32.534596   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:28.464017   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.962422   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:29.725325   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:32.225372   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.796963   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:30.797438   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:30.797489   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:30.797407   75501 retry.go:31] will retry after 1.774832047s: waiting for machine to come up
	I1105 19:11:32.574423   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:32.575000   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:32.575047   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:32.574929   75501 retry.go:31] will retry after 2.041093372s: waiting for machine to come up
	I1105 19:11:34.618469   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:34.618954   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:34.619015   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:34.618915   75501 retry.go:31] will retry after 2.731949113s: waiting for machine to come up
	I1105 19:11:33.034690   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:33.535594   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.035526   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.534836   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.034947   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.535108   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.035417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.535438   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.034766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.535415   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:32.962469   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.963093   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.461010   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.724484   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.224511   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.352209   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:37.352752   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:37.352783   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:37.352686   75501 retry.go:31] will retry after 3.62202055s: waiting for machine to come up
	I1105 19:11:38.035553   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:38.534702   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.035332   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.534749   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.034989   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.535354   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.035624   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.534847   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.035293   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.535363   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.465635   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:41.961348   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:40.978791   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979231   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has current primary IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979249   73496 main.go:141] libmachine: (no-preload-459223) Found IP for machine: 192.168.72.101
	I1105 19:11:40.979258   73496 main.go:141] libmachine: (no-preload-459223) Reserving static IP address...
	I1105 19:11:40.979621   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.979650   73496 main.go:141] libmachine: (no-preload-459223) Reserved static IP address: 192.168.72.101
	I1105 19:11:40.979669   73496 main.go:141] libmachine: (no-preload-459223) DBG | skip adding static IP to network mk-no-preload-459223 - found existing host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"}
	I1105 19:11:40.979682   73496 main.go:141] libmachine: (no-preload-459223) Waiting for SSH to be available...
	I1105 19:11:40.979710   73496 main.go:141] libmachine: (no-preload-459223) DBG | Getting to WaitForSSH function...
	I1105 19:11:40.981725   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.982063   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982202   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH client type: external
	I1105 19:11:40.982227   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa (-rw-------)
	I1105 19:11:40.982258   73496 main.go:141] libmachine: (no-preload-459223) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:40.982286   73496 main.go:141] libmachine: (no-preload-459223) DBG | About to run SSH command:
	I1105 19:11:40.982310   73496 main.go:141] libmachine: (no-preload-459223) DBG | exit 0
	I1105 19:11:41.111259   73496 main.go:141] libmachine: (no-preload-459223) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:41.111639   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetConfigRaw
	I1105 19:11:41.112368   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.114811   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115215   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.115244   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115499   73496 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/config.json ...
	I1105 19:11:41.115687   73496 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:41.115705   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:41.115900   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.118059   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118481   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.118505   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118659   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.118833   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.118959   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.119078   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.119222   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.119426   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.119442   73496 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:41.235030   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:41.235060   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235270   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:11:41.235294   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235480   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.237980   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238288   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.238327   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238405   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.238567   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238687   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238805   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.238938   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.239150   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.239163   73496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-459223 && echo "no-preload-459223" | sudo tee /etc/hostname
	I1105 19:11:41.366664   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-459223
	
	I1105 19:11:41.366693   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.369672   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.369979   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.370006   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.370147   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.370335   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370661   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.370830   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.371067   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.371086   73496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-459223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-459223/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-459223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:41.495741   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:41.495774   73496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:41.495796   73496 buildroot.go:174] setting up certificates
	I1105 19:11:41.495804   73496 provision.go:84] configureAuth start
	I1105 19:11:41.495816   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.496076   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.498948   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499377   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.499409   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499552   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.501842   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502168   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.502198   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502367   73496 provision.go:143] copyHostCerts
	I1105 19:11:41.502428   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:41.502445   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:41.502516   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:41.502662   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:41.502674   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:41.502706   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:41.502814   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:41.502825   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:41.502853   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:41.502934   73496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.no-preload-459223 san=[127.0.0.1 192.168.72.101 localhost minikube no-preload-459223]
	I1105 19:11:41.648058   73496 provision.go:177] copyRemoteCerts
	I1105 19:11:41.648115   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:41.648137   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.650915   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651274   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.651306   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.651707   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.651878   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.652032   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:41.736549   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:11:41.759352   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:41.782205   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:41.804725   73496 provision.go:87] duration metric: took 308.906806ms to configureAuth
	I1105 19:11:41.804755   73496 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:41.804930   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:41.805011   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.807634   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.808071   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.808498   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808657   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808792   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.808960   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.809113   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.809125   73496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:42.033406   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:42.033449   73496 machine.go:96] duration metric: took 917.749182ms to provisionDockerMachine
	I1105 19:11:42.033462   73496 start.go:293] postStartSetup for "no-preload-459223" (driver="kvm2")
	I1105 19:11:42.033475   73496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:42.033506   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.033853   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:42.033883   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.037259   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037688   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.037722   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037869   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.038063   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.038231   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.038361   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.126624   73496 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:42.130761   73496 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:42.130794   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:42.130881   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:42.131006   73496 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:42.131120   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:42.140978   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:42.163880   73496 start.go:296] duration metric: took 130.405487ms for postStartSetup
	I1105 19:11:42.163933   73496 fix.go:56] duration metric: took 19.580327925s for fixHost
	I1105 19:11:42.163953   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.166648   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.166994   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.167025   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.167196   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.167394   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167565   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167705   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.167856   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:42.168016   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:42.168025   73496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:42.279303   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833902.251467447
	
	I1105 19:11:42.279336   73496 fix.go:216] guest clock: 1730833902.251467447
	I1105 19:11:42.279351   73496 fix.go:229] Guest: 2024-11-05 19:11:42.251467447 +0000 UTC Remote: 2024-11-05 19:11:42.163937292 +0000 UTC m=+356.505256250 (delta=87.530155ms)
	I1105 19:11:42.279378   73496 fix.go:200] guest clock delta is within tolerance: 87.530155ms
	I1105 19:11:42.279387   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 19.695831159s
	I1105 19:11:42.279417   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.279660   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:42.282462   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.282828   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.282871   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.283018   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283439   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283580   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283669   73496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:42.283716   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.283811   73496 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:42.283838   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.286528   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286754   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286891   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.286917   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287097   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.287112   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287124   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287313   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287495   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287510   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287666   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287664   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.287769   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.398511   73496 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:42.404337   73496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:42.550196   73496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:42.555775   73496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:42.555853   73496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:42.571003   73496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:42.571031   73496 start.go:495] detecting cgroup driver to use...
	I1105 19:11:42.571123   73496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:42.586390   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:42.599887   73496 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:42.599944   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:42.613260   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:42.626371   73496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:42.736949   73496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:42.898897   73496 docker.go:233] disabling docker service ...
	I1105 19:11:42.898965   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:42.912534   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:42.925075   73496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:43.043425   73496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:43.175468   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:43.190803   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:43.210413   73496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:43.210496   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.221971   73496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:43.222064   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.232251   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.241540   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.251131   73496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:43.261218   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.270932   73496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.287905   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.297730   73496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:43.307263   73496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:43.307319   73496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:43.319421   73496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:43.328415   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:43.445798   73496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:43.532190   73496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:43.532284   73496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:43.536931   73496 start.go:563] Will wait 60s for crictl version
	I1105 19:11:43.536986   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.540525   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:43.576428   73496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:43.576540   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.603034   73496 ssh_runner.go:195] Run: crio --version
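
After `systemctl restart crio`, minikube waits up to 60s for /var/run/crio/crio.sock to appear before probing the runtime with crictl and `crio --version`. A hedged sketch of that wait loop (socket path and timeout taken from the log; the polling helper is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the CRI socket exists or the deadline passes,
    // roughly matching "Will wait 60s for socket path /var/run/crio/crio.sock".
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("CRI socket is ready")
    }
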
	I1105 19:11:43.631229   73496 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:39.724162   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:42.224141   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:44.224609   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:43.632482   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:43.634912   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635227   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:43.635260   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635530   73496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:43.639287   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
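
The grep-then-rewrite pair above keeps exactly one host.minikube.internal entry in /etc/hosts: any existing line for that name is filtered out and a fresh mapping is appended before the file is copied back. A stdlib-only sketch of the same replace-or-append idiom (it prints the new content rather than writing the file; values taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any line already ending in "\t<name>" and appends a
    // fresh "<ip>\t<name>" mapping, like the bash pipeline in the log.
    func upsertHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
    		fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(upsertHost(string(data), "192.168.72.1", "host.minikube.internal"))
    }
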
	I1105 19:11:43.650818   73496 kubeadm.go:883] updating cluster {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:43.650963   73496 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:43.651042   73496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:43.685392   73496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:43.685421   73496 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:43.685492   73496 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.685500   73496 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.685517   73496 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.685547   73496 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.685506   73496 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.685569   73496 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.685558   73496 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.685623   73496 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.686958   73496 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.686979   73496 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.686976   73496 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.687017   73496 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.687030   73496 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.687057   73496 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1105 19:11:43.898928   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.914069   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1105 19:11:43.934388   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.940664   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.947392   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.951614   73496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1105 19:11:43.951652   73496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.951686   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.957000   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.045057   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.075256   73496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1105 19:11:44.075289   73496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1105 19:11:44.075304   73496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.075310   73496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075357   73496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1105 19:11:44.075388   73496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075417   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.075481   73496 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1105 19:11:44.075431   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075511   73496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.075543   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.102803   73496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1105 19:11:44.102856   73496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.102916   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.133582   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.133640   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.133655   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.133707   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.188042   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.188058   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.272464   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.272500   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.272467   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.272531   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.289003   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.289126   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.411162   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1105 19:11:44.411248   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.411307   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1105 19:11:44.411326   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:44.411361   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1105 19:11:44.411394   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:44.411432   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478064   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1105 19:11:44.478093   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478132   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1105 19:11:44.478152   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478178   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1105 19:11:44.478195   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1105 19:11:44.478211   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1105 19:11:44.478226   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:44.478249   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1105 19:11:44.478257   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:44.478324   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:44.889847   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.035199   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.534769   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.035551   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.535664   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.035103   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.535581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.035077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.535660   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.035462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.534898   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.962742   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.462884   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.724058   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:48.727054   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.976315   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.498135546s)
	I1105 19:11:46.976348   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1105 19:11:46.976361   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.498084867s)
	I1105 19:11:46.976386   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.498096252s)
	I1105 19:11:46.976392   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.498054417s)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1105 19:11:46.976395   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1105 19:11:46.976368   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976436   73496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.086553002s)
	I1105 19:11:46.976471   73496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1105 19:11:46.976488   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976506   73496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:46.976551   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:49.054369   73496 ssh_runner.go:235] Completed: which crictl: (2.077794607s)
	I1105 19:11:49.054455   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:49.054480   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.077976168s)
	I1105 19:11:49.054497   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1105 19:11:49.054520   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.054551   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.089648   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.509600   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455021031s)
	I1105 19:11:50.509639   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1105 19:11:50.509664   73496 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509679   73496 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.419997127s)
	I1105 19:11:50.509719   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509751   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.547301   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1105 19:11:50.547416   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:48.035320   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.535496   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.035636   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.535445   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.035499   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.535722   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.035700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.535310   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.035585   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.535468   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.962134   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.463479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.225155   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:53.723881   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:54.139987   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.592545704s)
	I1105 19:11:54.140021   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1105 19:11:54.140038   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.630297093s)
	I1105 19:11:54.140058   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1105 19:11:54.140089   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:54.140150   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:53.034919   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.535697   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.035353   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.534669   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.034957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.534747   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.035331   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.534699   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.465549   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.961291   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.725153   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:58.224417   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.887208   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.747032149s)
	I1105 19:11:55.887247   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1105 19:11:55.887278   73496 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:55.887331   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:57.753834   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.866475995s)
	I1105 19:11:57.753860   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1105 19:11:57.753879   73496 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:57.753917   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:58.605444   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1105 19:11:58.605490   73496 cache_images.go:123] Successfully loaded all cached images
	I1105 19:11:58.605498   73496 cache_images.go:92] duration metric: took 14.920064519s to LoadCachedImages
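
Because this "no-preload" profile has no preloaded image tarball, each control-plane image is first checked with `podman image inspect`, any stale tag is removed with `crictl rmi`, and the archive from the local cache directory is then loaded with `podman load -i`. A simplified sketch of that check-then-load loop, assuming the cached archives are already on the node (paths modeled on the log; error handling trimmed):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadCachedImage loads one image archive into the podman/CRI-O store
    // if the tag is not already present, mirroring the cache_images flow above.
    func loadCachedImage(tag, archive string) error {
    	// "podman image inspect" exits non-zero when the tag is missing.
    	if err := exec.Command("sudo", "podman", "image", "inspect", tag).Run(); err == nil {
    		return nil // already present, nothing to do
    	}
    	// Drop any stale reference, then load the cached archive.
    	_ = exec.Command("sudo", "crictl", "rmi", tag).Run()
    	return exec.Command("sudo", "podman", "load", "-i", archive).Run()
    }

    func main() {
    	images := map[string]string{
    		"registry.k8s.io/kube-apiserver:v1.31.2": "/var/lib/minikube/images/kube-apiserver_v1.31.2",
    		"registry.k8s.io/etcd:3.5.15-0":          "/var/lib/minikube/images/etcd_3.5.15-0",
    	}
    	for tag, archive := range images {
    		if err := loadCachedImage(tag, archive); err != nil {
    			fmt.Println("load failed:", tag, err)
    		}
    	}
    }
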
	I1105 19:11:58.605512   73496 kubeadm.go:934] updating node { 192.168.72.101 8443 v1.31.2 crio true true} ...
	I1105 19:11:58.605627   73496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-459223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
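
The kubelet unit above overrides ExecStart with the versioned binary plus node-specific flags (hostname override, kubeconfig, node IP). A small text/template sketch in Go that renders the same ExecStart line from the values shown in the log (the template and struct are illustrative only):

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletFlags holds only the values substituted into ExecStart above.
    type kubeletFlags struct {
    	Version  string
    	NodeName string
    	NodeIP   string
    }

    const execStart = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
    	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
    	`--config=/var/lib/kubelet/config.yaml ` +
    	`--hostname-override={{.NodeName}} ` +
    	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
    	`--node-ip={{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("execstart").Parse(execStart))
    	_ = t.Execute(os.Stdout, kubeletFlags{
    		Version:  "v1.31.2",
    		NodeName: "no-preload-459223",
    		NodeIP:   "192.168.72.101",
    	})
    }
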
	I1105 19:11:58.605719   73496 ssh_runner.go:195] Run: crio config
	I1105 19:11:58.654396   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:11:58.654422   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:58.654432   73496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:58.654456   73496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.101 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-459223 NodeName:no-preload-459223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:58.654636   73496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-459223"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.101"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.101"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:58.654714   73496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:58.666580   73496 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:58.666659   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:58.676390   73496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:11:58.692426   73496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:58.708650   73496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
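
The kubeadm.yaml just transferred is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only Go sketch that splits such a file on document separators and reports each document's kind, handy for sanity-checking the rendered config (the local file name is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Assumed local copy of the rendered kubeadm config.
    	data, err := os.ReadFile("kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	// Split on the YAML document separator and report each document's kind.
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		kind := "unknown"
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				kind = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:"))
    				break
    			}
    		}
    		fmt.Printf("document %d: %s\n", i+1, kind)
    	}
    }
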
	I1105 19:11:58.727451   73496 ssh_runner.go:195] Run: grep 192.168.72.101	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:58.731200   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:58.743437   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:58.850614   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:58.867662   73496 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223 for IP: 192.168.72.101
	I1105 19:11:58.867694   73496 certs.go:194] generating shared ca certs ...
	I1105 19:11:58.867715   73496 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:58.867896   73496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:58.867954   73496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:58.867988   73496 certs.go:256] generating profile certs ...
	I1105 19:11:58.868073   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/client.key
	I1105 19:11:58.868129   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key.0f61fe1e
	I1105 19:11:58.868163   73496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key
	I1105 19:11:58.868276   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:58.868316   73496 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:58.868323   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:58.868347   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:58.868380   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:58.868409   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:58.868450   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:58.869179   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:58.911433   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:58.947863   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:58.977511   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:59.022637   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:11:59.060992   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:59.086516   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:59.109616   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:59.135019   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:59.159832   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:59.184470   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:59.207138   73496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:59.224379   73496 ssh_runner.go:195] Run: openssl version
	I1105 19:11:59.230142   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:59.243624   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248086   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248157   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.253684   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:59.264169   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:59.274837   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279102   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279159   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.284540   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:59.295198   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:59.306105   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310073   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310115   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.315240   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:59.325470   73496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:59.329485   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:59.334985   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:59.340316   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:59.345717   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:59.351082   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:59.356631   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
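
Each `openssl x509 -noout -in ... -checkend 86400` above asks whether the certificate will still be valid in 24 hours. The equivalent check in Go with crypto/x509, shown as a sketch (the path is one of the certs listed above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the equivalent of the `openssl x509 -checkend` calls in the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
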
	I1105 19:11:59.361951   73496 kubeadm.go:392] StartCluster: {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:59.362047   73496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:59.362084   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.398746   73496 cri.go:89] found id: ""
	I1105 19:11:59.398819   73496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:59.408597   73496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:59.408614   73496 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:59.408656   73496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:59.418082   73496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:59.419128   73496 kubeconfig.go:125] found "no-preload-459223" server: "https://192.168.72.101:8443"
	I1105 19:11:59.421286   73496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:59.430458   73496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.101
	I1105 19:11:59.430490   73496 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:59.430500   73496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:59.430549   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.464047   73496 cri.go:89] found id: ""
	I1105 19:11:59.464102   73496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:59.480978   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:59.490808   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:59.490829   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:59.490871   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:59.499505   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:59.499559   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:59.508247   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:59.516942   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:59.517005   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:59.525910   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.534349   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:59.534392   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.544212   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:59.553794   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:59.553857   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:59.562739   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
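
Because the restarted VM has no /etc/kubernetes/*.conf files yet, each grep for the control-plane endpoint above fails with status 2 and the corresponding file is removed anyway before the kubeadm init phases regenerate it. A compact sketch of that check-and-remove pass (stdlib only; removing an already-missing file is treated as success):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or stale endpoint: remove so kubeadm can regenerate it.
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				fmt.Println("remove failed:", f, rmErr)
    			}
    			continue
    		}
    		fmt.Println("keeping", f)
    	}
    }
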
	I1105 19:11:59.571819   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:59.680938   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.564659   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:58.034948   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:58.534748   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.034961   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.535634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.035311   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.534756   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.035266   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.535256   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.035489   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.534701   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.963075   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.462112   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.224544   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:02.225623   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.226711   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.775338   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.844402   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.957534   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:12:00.957630   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.458375   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.958215   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.975834   73496 api_server.go:72] duration metric: took 1.018298528s to wait for apiserver process to appear ...
	I1105 19:12:01.975862   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:12:01.975884   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.774116   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.774149   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.774164   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.825378   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.825427   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.976663   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.984209   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:04.984244   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
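
The /healthz probes above first return 403 while anonymous access is still blocked, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish; minikube simply keeps polling until a 200 comes back. A hedged sketch of such a poll loop (TLS verification is skipped because the apiserver serves a self-signed cert during bootstrap; the URL comes from the log):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout expires; 403 and 500 are treated as "not ready yet".
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver serves a self-signed cert during bootstrap.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.72.101:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
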
	I1105 19:12:05.476825   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.484608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.484644   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.975985   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.981608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.981639   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:06.476014   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:06.480296   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:12:06.487584   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:12:06.487613   73496 api_server.go:131] duration metric: took 4.511744097s to wait for apiserver health ...
	I1105 19:12:06.487623   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:12:06.487632   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:12:06.489302   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:12:03.034795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:03.534764   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.034833   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.534795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.034815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.534885   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.535327   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.035253   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.535011   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.961693   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.962003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:07.461125   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.724362   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:09.224191   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.490496   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:12:06.500809   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:12:06.529242   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:12:06.542769   73496 system_pods.go:59] 8 kube-system pods found
	I1105 19:12:06.542806   73496 system_pods.go:61] "coredns-7c65d6cfc9-9vvhj" [fde1a6e7-6807-440c-a38d-4f39ede6c11e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:12:06.542818   73496 system_pods.go:61] "etcd-no-preload-459223" [398e3fc3-6902-4cbb-bc50-a72bab461839] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:12:06.542828   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [33a306b0-a41d-4ca3-9d01-69faa7825fe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:12:06.542837   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [865ae24c-d991-4650-9e17-7242f84403e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:12:06.542844   73496 system_pods.go:61] "kube-proxy-6h584" [dd35774f-a245-42af-8fe9-bd6933ad0e30] Running
	I1105 19:12:06.542852   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [27d3685e-d548-49b6-a24d-02b1f8656c66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:12:06.542859   73496 system_pods.go:61] "metrics-server-6867b74b74-5sp2j" [7ddaa66e-b4ba-4241-8dba-5fc6ab66d777] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:12:06.542864   73496 system_pods.go:61] "storage-provisioner" [49786ba3-e9fc-45ad-9418-fd3a0a7b652c] Running
	I1105 19:12:06.542873   73496 system_pods.go:74] duration metric: took 13.603868ms to wait for pod list to return data ...
	I1105 19:12:06.542883   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:12:06.549398   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:12:06.549425   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:12:06.549435   73496 node_conditions.go:105] duration metric: took 6.546615ms to run NodePressure ...
	I1105 19:12:06.549452   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:06.812829   73496 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818052   73496 kubeadm.go:739] kubelet initialised
	I1105 19:12:06.818082   73496 kubeadm.go:740] duration metric: took 5.227942ms waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818093   73496 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:12:06.823883   73496 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.830129   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830164   73496 pod_ready.go:82] duration metric: took 6.253499ms for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.830176   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830187   73496 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.834901   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834942   73496 pod_ready.go:82] duration metric: took 4.743456ms for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.834954   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834988   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.841446   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841474   73496 pod_ready.go:82] duration metric: took 6.472942ms for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.841485   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841494   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.933972   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.933998   73496 pod_ready.go:82] duration metric: took 92.493084ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.934006   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.934012   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333443   73496 pod_ready.go:93] pod "kube-proxy-6h584" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:07.333473   73496 pod_ready.go:82] duration metric: took 399.45278ms for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333486   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:09.339907   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:08.035104   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:08.534784   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.035198   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.535319   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.035258   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.534634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.035604   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.535077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.035096   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.961614   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.962113   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.724418   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.724954   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.839467   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.839725   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.035100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:13.534793   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.035120   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.535318   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.035062   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.535127   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.034840   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.534830   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.035105   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.534928   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.961398   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.224300   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.729666   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.339542   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:17.840399   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:17.840424   73496 pod_ready.go:82] duration metric: took 10.506929493s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:17.840433   73496 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:19.846676   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.035126   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:18.535446   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.035154   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.535413   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.035580   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.534802   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.035030   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.535250   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.034785   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.534700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.460480   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.461609   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.223496   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.224908   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.847279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:24.347279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.034721   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.534672   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.035358   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.534813   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.535342   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.034934   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.534766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.035389   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.534831   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.961556   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.460682   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:25.723807   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:27.724515   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.346351   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:28.035226   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:28.535577   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.034984   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.535633   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.035509   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.534907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.535421   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.034719   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.534952   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:32.535067   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:32.575052   74485 cri.go:89] found id: ""
	I1105 19:12:32.575085   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.575096   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:32.575104   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:32.575164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:32.609969   74485 cri.go:89] found id: ""
	I1105 19:12:32.610003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.610011   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:32.610017   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:32.610065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:32.642343   74485 cri.go:89] found id: ""
	I1105 19:12:32.642369   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.642376   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:32.642381   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:32.642426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:28.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:30.960340   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.725101   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.224788   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:31.346559   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:33.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.680144   74485 cri.go:89] found id: ""
	I1105 19:12:32.680177   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.680188   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:32.680196   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:32.680270   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:32.715216   74485 cri.go:89] found id: ""
	I1105 19:12:32.715248   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.715259   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:32.715267   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:32.715321   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:32.751742   74485 cri.go:89] found id: ""
	I1105 19:12:32.751771   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.751795   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:32.751803   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:32.751865   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:32.786944   74485 cri.go:89] found id: ""
	I1105 19:12:32.787003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.787015   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:32.787023   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:32.787080   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:32.820523   74485 cri.go:89] found id: ""
	I1105 19:12:32.820550   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.820557   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:32.820565   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:32.820575   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:32.873960   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:32.874000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:32.889268   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:32.889296   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:33.011825   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:33.011846   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:33.011862   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:33.082785   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:33.082827   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:35.630678   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:35.644410   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:35.644492   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:35.679567   74485 cri.go:89] found id: ""
	I1105 19:12:35.679598   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.679607   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:35.679613   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:35.679666   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:35.713685   74485 cri.go:89] found id: ""
	I1105 19:12:35.713713   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.713721   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:35.713726   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:35.713789   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:35.749496   74485 cri.go:89] found id: ""
	I1105 19:12:35.749525   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.749536   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:35.749543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:35.749611   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:35.784228   74485 cri.go:89] found id: ""
	I1105 19:12:35.784254   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.784263   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:35.784269   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:35.784317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:35.818620   74485 cri.go:89] found id: ""
	I1105 19:12:35.818680   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.818696   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:35.818703   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:35.818769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:35.852525   74485 cri.go:89] found id: ""
	I1105 19:12:35.852554   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.852566   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:35.852574   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:35.852648   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:35.887906   74485 cri.go:89] found id: ""
	I1105 19:12:35.887931   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.887939   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:35.887944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:35.887994   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:35.920566   74485 cri.go:89] found id: ""
	I1105 19:12:35.920594   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.920602   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:35.920612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:35.920627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:35.972706   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:35.972742   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:35.986114   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:35.986141   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:36.067016   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:36.067044   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:36.067060   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:36.158947   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:36.159003   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:32.962679   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.461449   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:37.462001   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:34.724028   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:36.724174   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.728373   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.848563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.347478   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:40.347899   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.700738   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:38.713280   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:38.713351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:38.747293   74485 cri.go:89] found id: ""
	I1105 19:12:38.747335   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.747347   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:38.747355   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:38.747414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:38.781607   74485 cri.go:89] found id: ""
	I1105 19:12:38.781635   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.781643   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:38.781648   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:38.781703   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:38.815303   74485 cri.go:89] found id: ""
	I1105 19:12:38.815333   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.815342   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:38.815348   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:38.815397   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:38.850128   74485 cri.go:89] found id: ""
	I1105 19:12:38.850156   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.850166   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:38.850174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:38.850233   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:38.882470   74485 cri.go:89] found id: ""
	I1105 19:12:38.882493   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.882500   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:38.882506   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:38.882563   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:38.914669   74485 cri.go:89] found id: ""
	I1105 19:12:38.914698   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.914706   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:38.914713   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:38.914762   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:38.946521   74485 cri.go:89] found id: ""
	I1105 19:12:38.946548   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.946556   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:38.946561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:38.946613   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:38.979628   74485 cri.go:89] found id: ""
	I1105 19:12:38.979655   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.979663   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:38.979672   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:38.979682   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:39.056066   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:39.056102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.092303   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:39.092333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:39.143754   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:39.143790   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:39.156553   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:39.156587   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:39.220882   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:41.721766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:41.734823   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:41.734893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:41.768636   74485 cri.go:89] found id: ""
	I1105 19:12:41.768668   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.768685   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:41.768693   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:41.768750   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:41.809506   74485 cri.go:89] found id: ""
	I1105 19:12:41.809533   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.809541   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:41.809546   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:41.809606   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:41.849953   74485 cri.go:89] found id: ""
	I1105 19:12:41.849977   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.849985   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:41.849991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:41.850037   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:41.893042   74485 cri.go:89] found id: ""
	I1105 19:12:41.893072   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.893084   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:41.893091   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:41.893152   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:41.936259   74485 cri.go:89] found id: ""
	I1105 19:12:41.936282   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.936292   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:41.936298   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:41.936347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:41.970322   74485 cri.go:89] found id: ""
	I1105 19:12:41.970344   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.970353   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:41.970360   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:41.970427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:42.004351   74485 cri.go:89] found id: ""
	I1105 19:12:42.004375   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.004383   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:42.004388   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:42.004443   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:42.035136   74485 cri.go:89] found id: ""
	I1105 19:12:42.035163   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.035174   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:42.035185   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:42.035201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:42.086760   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:42.086801   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:42.100795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:42.100829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:42.167480   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:42.167509   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:42.167529   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:42.248625   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:42.248664   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.961606   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.461423   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:41.224956   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:43.724906   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.846509   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.847235   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.785100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:44.798182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:44.798248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:44.834080   74485 cri.go:89] found id: ""
	I1105 19:12:44.834107   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.834115   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:44.834120   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:44.834179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:44.870572   74485 cri.go:89] found id: ""
	I1105 19:12:44.870602   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.870613   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:44.870620   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:44.870691   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:44.908960   74485 cri.go:89] found id: ""
	I1105 19:12:44.908991   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.909002   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:44.909010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:44.909075   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:44.945310   74485 cri.go:89] found id: ""
	I1105 19:12:44.945342   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.945350   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:44.945355   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:44.945409   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:44.982893   74485 cri.go:89] found id: ""
	I1105 19:12:44.982935   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.982946   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:44.982953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:44.983030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:45.015529   74485 cri.go:89] found id: ""
	I1105 19:12:45.015559   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.015571   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:45.015578   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:45.015640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:45.047252   74485 cri.go:89] found id: ""
	I1105 19:12:45.047284   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.047295   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:45.047302   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:45.047364   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:45.082963   74485 cri.go:89] found id: ""
	I1105 19:12:45.083009   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.083018   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:45.083026   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:45.083039   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:45.131844   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:45.131881   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:45.145500   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:45.145530   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:45.214668   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:45.214709   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:45.214725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:45.291203   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:45.291243   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:44.963672   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.461610   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:46.223849   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:48.225352   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.346007   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:49.346691   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.831908   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:47.844873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:47.844957   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:47.881587   74485 cri.go:89] found id: ""
	I1105 19:12:47.881617   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.881628   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:47.881644   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:47.881714   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:47.918381   74485 cri.go:89] found id: ""
	I1105 19:12:47.918411   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.918423   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:47.918430   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:47.918491   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:47.950835   74485 cri.go:89] found id: ""
	I1105 19:12:47.950864   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.950880   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:47.950889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:47.950947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:47.985234   74485 cri.go:89] found id: ""
	I1105 19:12:47.985261   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.985272   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:47.985279   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:47.985338   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:48.019406   74485 cri.go:89] found id: ""
	I1105 19:12:48.019437   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.019448   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:48.019455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:48.019532   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:48.053126   74485 cri.go:89] found id: ""
	I1105 19:12:48.053160   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.053172   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:48.053180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:48.053241   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:48.086847   74485 cri.go:89] found id: ""
	I1105 19:12:48.086872   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.086879   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:48.086885   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:48.086944   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:48.122366   74485 cri.go:89] found id: ""
	I1105 19:12:48.122388   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.122396   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:48.122404   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:48.122421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:48.171579   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:48.171622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:48.185207   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:48.185234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:48.249553   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:48.249575   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:48.249586   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:48.323391   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:48.323427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:50.861939   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:50.874943   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:50.875041   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:50.911498   74485 cri.go:89] found id: ""
	I1105 19:12:50.911522   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.911530   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:50.911536   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:50.911591   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:50.946936   74485 cri.go:89] found id: ""
	I1105 19:12:50.946962   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.946988   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:50.947034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:50.947098   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:50.983220   74485 cri.go:89] found id: ""
	I1105 19:12:50.983246   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.983258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:50.983265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:50.983314   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:51.017052   74485 cri.go:89] found id: ""
	I1105 19:12:51.017078   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.017086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:51.017092   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:51.017141   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:51.051417   74485 cri.go:89] found id: ""
	I1105 19:12:51.051448   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.051459   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:51.051466   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:51.051529   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:51.085129   74485 cri.go:89] found id: ""
	I1105 19:12:51.085164   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.085177   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:51.085182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:51.085232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:51.122065   74485 cri.go:89] found id: ""
	I1105 19:12:51.122100   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.122113   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:51.122120   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:51.122178   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:51.154909   74485 cri.go:89] found id: ""
	I1105 19:12:51.154938   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.154946   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:51.154954   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:51.154966   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:51.167768   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:51.167798   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:51.231849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:51.231873   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:51.231897   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:51.314426   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:51.314487   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:51.356654   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:51.356685   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:49.961294   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.461707   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:50.723534   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.723821   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:51.347677   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.847328   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.911774   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:53.924884   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:53.924968   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:53.957690   74485 cri.go:89] found id: ""
	I1105 19:12:53.957719   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.957729   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:53.957737   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:53.957802   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:53.990717   74485 cri.go:89] found id: ""
	I1105 19:12:53.990744   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.990751   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:53.990757   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:53.990803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:54.023229   74485 cri.go:89] found id: ""
	I1105 19:12:54.023251   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.023258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:54.023263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:54.023320   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:54.056950   74485 cri.go:89] found id: ""
	I1105 19:12:54.056977   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.056987   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:54.056995   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:54.057056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:54.091729   74485 cri.go:89] found id: ""
	I1105 19:12:54.091756   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.091768   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:54.091776   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:54.091828   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:54.123964   74485 cri.go:89] found id: ""
	I1105 19:12:54.123991   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.124001   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:54.124009   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:54.124070   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:54.155164   74485 cri.go:89] found id: ""
	I1105 19:12:54.155195   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.155204   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:54.155209   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:54.155268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:54.188161   74485 cri.go:89] found id: ""
	I1105 19:12:54.188191   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.188202   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:54.188213   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:54.188226   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:54.240906   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:54.240941   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:54.254061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:54.254093   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:54.321973   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:54.322007   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:54.322026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:54.405106   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:54.405147   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:56.941801   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:56.954658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:56.954741   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:56.990372   74485 cri.go:89] found id: ""
	I1105 19:12:56.990400   74485 logs.go:282] 0 containers: []
	W1105 19:12:56.990411   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:56.990419   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:56.990479   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:57.023047   74485 cri.go:89] found id: ""
	I1105 19:12:57.023082   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.023093   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:57.023102   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:57.023163   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:57.054991   74485 cri.go:89] found id: ""
	I1105 19:12:57.055021   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.055030   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:57.055036   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:57.055094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:57.086182   74485 cri.go:89] found id: ""
	I1105 19:12:57.086214   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.086225   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:57.086233   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:57.086295   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:57.120322   74485 cri.go:89] found id: ""
	I1105 19:12:57.120350   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.120361   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:57.120368   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:57.120431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:57.153751   74485 cri.go:89] found id: ""
	I1105 19:12:57.153781   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.153790   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:57.153796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:57.153845   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:57.189208   74485 cri.go:89] found id: ""
	I1105 19:12:57.189234   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.189244   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:57.189251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:57.189317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:57.223259   74485 cri.go:89] found id: ""
	I1105 19:12:57.223292   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.223301   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:57.223308   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:57.223320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:57.273063   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:57.273098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:57.287759   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:57.287783   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:57.353387   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:57.353409   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:57.353421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:57.426374   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:57.426411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:54.462191   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.960479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:54.723926   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.724988   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.224704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:55.847609   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:58.347062   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.348243   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.965907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:59.979081   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:59.979149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:00.010955   74485 cri.go:89] found id: ""
	I1105 19:13:00.011001   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.011012   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:00.011021   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:00.011081   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:00.044800   74485 cri.go:89] found id: ""
	I1105 19:13:00.044825   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.044832   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:00.044838   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:00.044894   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:00.082999   74485 cri.go:89] found id: ""
	I1105 19:13:00.083040   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.083050   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:00.083059   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:00.083125   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:00.120792   74485 cri.go:89] found id: ""
	I1105 19:13:00.120826   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.120835   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:00.120840   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:00.120903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:00.153156   74485 cri.go:89] found id: ""
	I1105 19:13:00.153188   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.153200   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:00.153207   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:00.153273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:00.189039   74485 cri.go:89] found id: ""
	I1105 19:13:00.189066   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.189073   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:00.189079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:00.189143   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:00.220904   74485 cri.go:89] found id: ""
	I1105 19:13:00.220932   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.220942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:00.220950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:00.221012   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:00.255414   74485 cri.go:89] found id: ""
	I1105 19:13:00.255443   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.255454   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:00.255464   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:00.255480   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:00.329027   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:00.329050   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:00.329061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:00.405813   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:00.405847   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:00.443302   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:00.443332   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:00.498413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:00.498452   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:58.960870   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.962098   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:01.723865   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.724945   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:02.846369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:04.846751   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.011897   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:03.025351   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:03.025419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:03.058881   74485 cri.go:89] found id: ""
	I1105 19:13:03.058910   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.058920   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:03.058928   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:03.059018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:03.093549   74485 cri.go:89] found id: ""
	I1105 19:13:03.093580   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.093592   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:03.093600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:03.093660   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:03.132355   74485 cri.go:89] found id: ""
	I1105 19:13:03.132384   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.132395   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:03.132402   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:03.132463   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:03.164832   74485 cri.go:89] found id: ""
	I1105 19:13:03.164864   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.164875   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:03.164888   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:03.164947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:03.203187   74485 cri.go:89] found id: ""
	I1105 19:13:03.203213   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.203221   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:03.203226   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:03.203282   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:03.238867   74485 cri.go:89] found id: ""
	I1105 19:13:03.238899   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.238921   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:03.238928   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:03.239010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:03.276139   74485 cri.go:89] found id: ""
	I1105 19:13:03.276174   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.276187   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:03.276195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:03.276251   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:03.312588   74485 cri.go:89] found id: ""
	I1105 19:13:03.312613   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.312631   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:03.312639   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:03.312650   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:03.379754   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:03.379782   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:03.379797   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:03.455719   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:03.455754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.493428   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:03.493458   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:03.545447   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:03.545481   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.060213   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:06.074756   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:06.074831   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:06.111392   74485 cri.go:89] found id: ""
	I1105 19:13:06.111421   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.111429   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:06.111435   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:06.111493   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:06.147535   74485 cri.go:89] found id: ""
	I1105 19:13:06.147568   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.147579   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:06.147585   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:06.147646   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:06.183176   74485 cri.go:89] found id: ""
	I1105 19:13:06.183198   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.183205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:06.183211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:06.183262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:06.213957   74485 cri.go:89] found id: ""
	I1105 19:13:06.213983   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.213992   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:06.213997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:06.214060   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:06.251199   74485 cri.go:89] found id: ""
	I1105 19:13:06.251227   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.251234   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:06.251240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:06.251297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:06.288128   74485 cri.go:89] found id: ""
	I1105 19:13:06.288157   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.288167   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:06.288174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:06.288236   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:06.325265   74485 cri.go:89] found id: ""
	I1105 19:13:06.325296   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.325306   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:06.325314   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:06.325375   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:06.359649   74485 cri.go:89] found id: ""
	I1105 19:13:06.359689   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.359700   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:06.359710   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:06.359725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:06.408423   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:06.408456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.421776   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:06.421804   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:06.487464   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:06.487493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:06.487507   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:06.565789   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:06.565829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.461192   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.725002   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:08.225146   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:07.346498   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.347264   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.104578   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:09.117930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:09.118022   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:09.156055   74485 cri.go:89] found id: ""
	I1105 19:13:09.156083   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.156093   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:09.156101   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:09.156161   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:09.190470   74485 cri.go:89] found id: ""
	I1105 19:13:09.190499   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.190509   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:09.190516   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:09.190576   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:09.222568   74485 cri.go:89] found id: ""
	I1105 19:13:09.222595   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.222606   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:09.222612   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:09.222677   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:09.260251   74485 cri.go:89] found id: ""
	I1105 19:13:09.260282   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.260292   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:09.260300   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:09.260362   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:09.296006   74485 cri.go:89] found id: ""
	I1105 19:13:09.296036   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.296047   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:09.296054   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:09.296118   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:09.331213   74485 cri.go:89] found id: ""
	I1105 19:13:09.331246   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.331257   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:09.331265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:09.331333   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:09.364286   74485 cri.go:89] found id: ""
	I1105 19:13:09.364316   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.364327   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:09.364335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:09.364445   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:09.398060   74485 cri.go:89] found id: ""
	I1105 19:13:09.398084   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.398092   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:09.398101   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:09.398113   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:09.447373   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:09.447409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:09.461483   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:09.461514   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:09.528213   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:09.528236   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:09.528248   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:09.607397   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:09.607430   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.146158   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:12.159183   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:12.159262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:12.193917   74485 cri.go:89] found id: ""
	I1105 19:13:12.193952   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.193963   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:12.193971   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:12.194036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:12.226558   74485 cri.go:89] found id: ""
	I1105 19:13:12.226585   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.226594   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:12.226600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:12.226662   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:12.258437   74485 cri.go:89] found id: ""
	I1105 19:13:12.258469   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.258481   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:12.258488   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:12.258557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:12.291308   74485 cri.go:89] found id: ""
	I1105 19:13:12.291341   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.291353   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:12.291361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:12.291431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:12.325768   74485 cri.go:89] found id: ""
	I1105 19:13:12.325801   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.325812   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:12.325819   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:12.325884   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:12.361077   74485 cri.go:89] found id: ""
	I1105 19:13:12.361100   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.361108   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:12.361118   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:12.361179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:12.394769   74485 cri.go:89] found id: ""
	I1105 19:13:12.394791   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.394800   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:12.394806   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:12.394864   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:12.430138   74485 cri.go:89] found id: ""
	I1105 19:13:12.430167   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.430177   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:12.430189   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:12.430200   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.472596   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:12.472637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:12.523107   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:12.523143   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:12.535797   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:12.535824   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:12.604088   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:12.604108   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:12.604123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:08.460647   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.462830   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.225468   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.225693   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:11.849320   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.347487   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:15.185725   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:15.200158   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:15.200238   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:15.238309   74485 cri.go:89] found id: ""
	I1105 19:13:15.238334   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.238342   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:15.238349   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:15.238404   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:15.272897   74485 cri.go:89] found id: ""
	I1105 19:13:15.272927   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.272938   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:15.272945   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:15.273013   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:15.307700   74485 cri.go:89] found id: ""
	I1105 19:13:15.307726   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.307737   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:15.307744   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:15.307810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:15.340156   74485 cri.go:89] found id: ""
	I1105 19:13:15.340182   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.340196   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:15.340202   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:15.340252   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:15.375930   74485 cri.go:89] found id: ""
	I1105 19:13:15.375963   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.375971   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:15.375976   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:15.376031   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:15.409876   74485 cri.go:89] found id: ""
	I1105 19:13:15.409905   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.409915   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:15.409922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:15.409984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:15.442781   74485 cri.go:89] found id: ""
	I1105 19:13:15.442808   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.442819   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:15.442825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:15.442896   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:15.480578   74485 cri.go:89] found id: ""
	I1105 19:13:15.480606   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.480614   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:15.480623   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:15.480634   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:15.530910   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:15.530952   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:15.544351   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:15.544382   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:15.618345   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:15.618373   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:15.618396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:15.704408   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:15.704451   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:14.961408   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.961486   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.724130   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.724204   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.724704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.347818   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.846423   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.244882   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:18.258667   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:18.258758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:18.292140   74485 cri.go:89] found id: ""
	I1105 19:13:18.292163   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.292171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:18.292178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:18.292235   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:18.324954   74485 cri.go:89] found id: ""
	I1105 19:13:18.324979   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.324985   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:18.324991   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:18.325048   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:18.361943   74485 cri.go:89] found id: ""
	I1105 19:13:18.361972   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.361983   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:18.361991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:18.362062   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:18.396012   74485 cri.go:89] found id: ""
	I1105 19:13:18.396036   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.396044   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:18.396050   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:18.396097   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:18.428852   74485 cri.go:89] found id: ""
	I1105 19:13:18.428875   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.428883   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:18.428889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:18.428946   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:18.464364   74485 cri.go:89] found id: ""
	I1105 19:13:18.464390   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.464397   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:18.464404   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:18.464464   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:18.496478   74485 cri.go:89] found id: ""
	I1105 19:13:18.496505   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.496514   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:18.496519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:18.496577   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:18.530313   74485 cri.go:89] found id: ""
	I1105 19:13:18.530339   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.530348   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:18.530356   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:18.530368   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:18.582593   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:18.582627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:18.596580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:18.596616   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:18.663920   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:18.663959   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:18.663974   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:18.740706   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:18.740746   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.281614   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:21.295841   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:21.295919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:21.330832   74485 cri.go:89] found id: ""
	I1105 19:13:21.330856   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.330864   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:21.330869   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:21.330922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:21.365228   74485 cri.go:89] found id: ""
	I1105 19:13:21.365257   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.365265   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:21.365269   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:21.365317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:21.418675   74485 cri.go:89] found id: ""
	I1105 19:13:21.418702   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.418719   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:21.418727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:21.418793   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:21.453966   74485 cri.go:89] found id: ""
	I1105 19:13:21.453994   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.454003   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:21.454008   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:21.454058   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:21.492030   74485 cri.go:89] found id: ""
	I1105 19:13:21.492056   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.492067   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:21.492078   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:21.492128   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:21.529146   74485 cri.go:89] found id: ""
	I1105 19:13:21.529174   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.529183   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:21.529190   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:21.529250   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:21.566491   74485 cri.go:89] found id: ""
	I1105 19:13:21.566519   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.566528   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:21.566533   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:21.566595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:21.605720   74485 cri.go:89] found id: ""
	I1105 19:13:21.605745   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.605754   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:21.605762   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:21.605772   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:21.682385   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:21.682408   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:21.682420   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:21.764519   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:21.764557   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.805090   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:21.805117   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:21.857560   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:21.857593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:19.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.961995   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.224702   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.226864   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:20.850915   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.346819   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.347230   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:24.371420   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:24.384566   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:24.384634   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:24.416283   74485 cri.go:89] found id: ""
	I1105 19:13:24.416308   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.416319   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:24.416327   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:24.416388   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:24.452875   74485 cri.go:89] found id: ""
	I1105 19:13:24.452899   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.452907   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:24.452913   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:24.452964   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:24.489946   74485 cri.go:89] found id: ""
	I1105 19:13:24.489974   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.489992   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:24.490000   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:24.490056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:24.527348   74485 cri.go:89] found id: ""
	I1105 19:13:24.527377   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.527388   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:24.527395   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:24.527451   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:24.558992   74485 cri.go:89] found id: ""
	I1105 19:13:24.559024   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.559035   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:24.559047   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:24.559105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:24.591405   74485 cri.go:89] found id: ""
	I1105 19:13:24.591437   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.591448   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:24.591455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:24.591516   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.625002   74485 cri.go:89] found id: ""
	I1105 19:13:24.625031   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.625040   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:24.625048   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:24.625114   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:24.657867   74485 cri.go:89] found id: ""
	I1105 19:13:24.657896   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.657907   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:24.657918   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:24.657931   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:24.708444   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:24.708482   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:24.721771   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:24.721814   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:24.793946   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:24.793980   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:24.793996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:24.875130   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:24.875167   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:27.412872   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:27.426996   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:27.427072   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:27.462434   74485 cri.go:89] found id: ""
	I1105 19:13:27.462458   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.462468   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:27.462475   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:27.462536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:27.496916   74485 cri.go:89] found id: ""
	I1105 19:13:27.496951   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.496962   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:27.496969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:27.497035   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:27.528826   74485 cri.go:89] found id: ""
	I1105 19:13:27.528853   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.528861   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:27.528867   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:27.528919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:27.563164   74485 cri.go:89] found id: ""
	I1105 19:13:27.563193   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.563204   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:27.563210   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:27.563284   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:27.600136   74485 cri.go:89] found id: ""
	I1105 19:13:27.600164   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.600174   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:27.600180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:27.600247   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:27.634326   74485 cri.go:89] found id: ""
	I1105 19:13:27.634358   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.634368   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:27.634377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:27.634452   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.462295   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:26.961567   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.723935   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.725498   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.847362   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.349542   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.668154   74485 cri.go:89] found id: ""
	I1105 19:13:27.668185   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.668196   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:27.668203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:27.668263   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:27.706016   74485 cri.go:89] found id: ""
	I1105 19:13:27.706043   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.706051   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:27.706059   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:27.706071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:27.755890   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:27.755929   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:27.773038   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:27.773063   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:27.863392   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:27.863414   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:27.863429   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:27.949149   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:27.949185   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.489333   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:30.502794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:30.502878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:30.536263   74485 cri.go:89] found id: ""
	I1105 19:13:30.536289   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.536297   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:30.536302   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:30.536347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:30.570418   74485 cri.go:89] found id: ""
	I1105 19:13:30.570445   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.570455   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:30.570462   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:30.570523   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:30.601972   74485 cri.go:89] found id: ""
	I1105 19:13:30.602003   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.602013   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:30.602020   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:30.602086   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:30.634151   74485 cri.go:89] found id: ""
	I1105 19:13:30.634183   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.634195   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:30.634203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:30.634265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:30.666384   74485 cri.go:89] found id: ""
	I1105 19:13:30.666415   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.666425   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:30.666433   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:30.666498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:30.699587   74485 cri.go:89] found id: ""
	I1105 19:13:30.699619   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.699631   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:30.699639   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:30.699699   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:30.731917   74485 cri.go:89] found id: ""
	I1105 19:13:30.731972   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.731983   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:30.731990   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:30.732051   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:30.768807   74485 cri.go:89] found id: ""
	I1105 19:13:30.768832   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.768840   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:30.768849   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:30.768860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:30.848594   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:30.848626   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.889031   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:30.889067   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:30.940550   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:30.940588   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:30.953810   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:30.953845   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:31.023633   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:29.461686   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:31.961484   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.225024   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.723965   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.847298   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:35.347135   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:33.524150   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:33.539025   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:33.539112   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:33.584756   74485 cri.go:89] found id: ""
	I1105 19:13:33.584786   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.584799   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:33.584807   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:33.584869   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:33.624785   74485 cri.go:89] found id: ""
	I1105 19:13:33.624816   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.624829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:33.624836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:33.625025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:33.668750   74485 cri.go:89] found id: ""
	I1105 19:13:33.668783   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.668794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:33.668804   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:33.668867   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:33.701675   74485 cri.go:89] found id: ""
	I1105 19:13:33.701707   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.701735   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:33.701743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:33.701817   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:33.737368   74485 cri.go:89] found id: ""
	I1105 19:13:33.737393   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.737401   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:33.737407   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:33.737458   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:33.770589   74485 cri.go:89] found id: ""
	I1105 19:13:33.770620   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.770630   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:33.770638   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:33.770704   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:33.802635   74485 cri.go:89] found id: ""
	I1105 19:13:33.802668   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.802680   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:33.802687   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:33.802751   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:33.839274   74485 cri.go:89] found id: ""
	I1105 19:13:33.839301   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.839309   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:33.839317   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:33.839328   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:33.881049   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:33.881090   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:33.932704   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:33.932743   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:33.945979   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:33.946007   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:34.017355   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:34.017375   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:34.017390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:36.596284   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:36.608240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:36.608306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:36.641846   74485 cri.go:89] found id: ""
	I1105 19:13:36.641878   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.641887   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:36.641901   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:36.641966   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:36.676553   74485 cri.go:89] found id: ""
	I1105 19:13:36.676584   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.676595   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:36.676602   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:36.676669   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:36.711931   74485 cri.go:89] found id: ""
	I1105 19:13:36.711961   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.711972   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:36.711980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:36.712042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:36.748510   74485 cri.go:89] found id: ""
	I1105 19:13:36.748534   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.748542   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:36.748547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:36.748596   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:36.781869   74485 cri.go:89] found id: ""
	I1105 19:13:36.781899   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.781912   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:36.781922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:36.781983   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:36.816574   74485 cri.go:89] found id: ""
	I1105 19:13:36.816597   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.816605   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:36.816610   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:36.816658   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:36.852894   74485 cri.go:89] found id: ""
	I1105 19:13:36.852921   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.852928   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:36.852934   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:36.852996   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:36.891732   74485 cri.go:89] found id: ""
	I1105 19:13:36.891764   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.891783   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:36.891795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:36.891810   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:36.964948   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:36.964972   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:36.964987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:37.043727   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:37.043765   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:37.084306   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:37.084333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:37.133238   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:37.133274   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:34.461773   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:36.960440   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:34.724805   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.224830   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.227912   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.347383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.347770   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.647492   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:39.659944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:39.660025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:39.695382   74485 cri.go:89] found id: ""
	I1105 19:13:39.695405   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.695415   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:39.695422   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:39.695480   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:39.731807   74485 cri.go:89] found id: ""
	I1105 19:13:39.731833   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.731841   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:39.731846   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:39.731895   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:39.766913   74485 cri.go:89] found id: ""
	I1105 19:13:39.766945   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.766955   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:39.766963   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:39.767049   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:39.800265   74485 cri.go:89] found id: ""
	I1105 19:13:39.800288   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.800296   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:39.800301   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:39.800346   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:39.832753   74485 cri.go:89] found id: ""
	I1105 19:13:39.832781   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.832789   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:39.832794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:39.832843   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:39.865950   74485 cri.go:89] found id: ""
	I1105 19:13:39.865980   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.865990   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:39.865997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:39.866046   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:39.902918   74485 cri.go:89] found id: ""
	I1105 19:13:39.902948   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.902957   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:39.902962   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:39.903039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:39.935086   74485 cri.go:89] found id: ""
	I1105 19:13:39.935117   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.935129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:39.935139   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:39.935152   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:39.997935   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:39.997961   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:39.997976   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:40.076794   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:40.076852   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:40.114178   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:40.114209   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:40.163512   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:40.163550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:38.961003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:40.962241   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.724237   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:43.725317   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.847149   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:44.346097   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:42.676843   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:42.689855   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:42.689930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:42.724108   74485 cri.go:89] found id: ""
	I1105 19:13:42.724139   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.724148   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:42.724156   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:42.724218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:42.760816   74485 cri.go:89] found id: ""
	I1105 19:13:42.760844   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.760854   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:42.760861   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:42.760924   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:42.795111   74485 cri.go:89] found id: ""
	I1105 19:13:42.795134   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.795142   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:42.795147   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:42.795195   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:42.832964   74485 cri.go:89] found id: ""
	I1105 19:13:42.832988   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.832997   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:42.833003   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:42.833065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:42.868817   74485 cri.go:89] found id: ""
	I1105 19:13:42.868848   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.868858   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:42.868865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:42.868933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:42.902015   74485 cri.go:89] found id: ""
	I1105 19:13:42.902044   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.902051   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:42.902056   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:42.902146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:42.934298   74485 cri.go:89] found id: ""
	I1105 19:13:42.934322   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.934330   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:42.934335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:42.934385   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:42.969804   74485 cri.go:89] found id: ""
	I1105 19:13:42.969831   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.969843   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:42.969854   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:42.969873   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:43.019922   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:43.019959   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:43.033594   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:43.033622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:43.108220   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:43.108240   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:43.108251   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:43.191946   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:43.191987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:45.730728   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:45.743344   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:45.743419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:45.777693   74485 cri.go:89] found id: ""
	I1105 19:13:45.777728   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.777739   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:45.777747   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:45.777810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:45.810195   74485 cri.go:89] found id: ""
	I1105 19:13:45.810222   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.810233   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:45.810240   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:45.810308   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:45.851210   74485 cri.go:89] found id: ""
	I1105 19:13:45.851240   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.851247   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:45.851252   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:45.851311   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:45.885501   74485 cri.go:89] found id: ""
	I1105 19:13:45.885531   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.885540   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:45.885546   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:45.885595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:45.921638   74485 cri.go:89] found id: ""
	I1105 19:13:45.921667   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.921676   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:45.921684   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:45.921745   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:45.954341   74485 cri.go:89] found id: ""
	I1105 19:13:45.954373   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.954384   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:45.954394   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:45.954461   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:45.988840   74485 cri.go:89] found id: ""
	I1105 19:13:45.988865   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.988873   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:45.988879   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:45.988949   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:46.025409   74485 cri.go:89] found id: ""
	I1105 19:13:46.025441   74485 logs.go:282] 0 containers: []
	W1105 19:13:46.025458   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:46.025470   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:46.025486   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:46.037763   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:46.037787   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:46.112619   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:46.112663   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:46.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:46.192165   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:46.192199   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:46.233235   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:46.233263   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:42.962569   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:45.461256   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:47.461781   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.225004   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.723774   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.346687   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.787685   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:48.800681   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:48.800749   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:48.835344   74485 cri.go:89] found id: ""
	I1105 19:13:48.835366   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.835374   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:48.835383   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:48.835429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:48.867447   74485 cri.go:89] found id: ""
	I1105 19:13:48.867474   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.867483   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:48.867488   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:48.867536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:48.899135   74485 cri.go:89] found id: ""
	I1105 19:13:48.899160   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.899167   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:48.899172   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:48.899221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:48.932208   74485 cri.go:89] found id: ""
	I1105 19:13:48.932243   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.932255   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:48.932263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:48.932326   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:48.967174   74485 cri.go:89] found id: ""
	I1105 19:13:48.967202   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.967210   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:48.967215   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:48.967267   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:48.998902   74485 cri.go:89] found id: ""
	I1105 19:13:48.998932   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.998942   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:48.998950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:48.999030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:49.030946   74485 cri.go:89] found id: ""
	I1105 19:13:49.030988   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.030999   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:49.031006   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:49.031074   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:49.063489   74485 cri.go:89] found id: ""
	I1105 19:13:49.063517   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.063528   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:49.063540   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:49.063555   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:49.116433   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:49.116477   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:49.131439   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:49.131476   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:49.199770   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:49.199795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:49.199809   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:49.275503   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:49.275543   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:51.816208   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:51.829328   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:51.829399   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:51.863320   74485 cri.go:89] found id: ""
	I1105 19:13:51.863346   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.863354   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:51.863359   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:51.863406   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:51.896589   74485 cri.go:89] found id: ""
	I1105 19:13:51.896618   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.896628   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:51.896635   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:51.896697   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:51.933744   74485 cri.go:89] found id: ""
	I1105 19:13:51.933769   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.933776   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:51.933781   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:51.933829   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:51.970806   74485 cri.go:89] found id: ""
	I1105 19:13:51.970829   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.970836   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:51.970842   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:51.970889   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:52.004087   74485 cri.go:89] found id: ""
	I1105 19:13:52.004116   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.004124   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:52.004129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:52.004186   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:52.041721   74485 cri.go:89] found id: ""
	I1105 19:13:52.041752   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.041763   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:52.041771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:52.041835   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:52.079253   74485 cri.go:89] found id: ""
	I1105 19:13:52.079277   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.079285   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:52.079292   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:52.079351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:52.112604   74485 cri.go:89] found id: ""
	I1105 19:13:52.112642   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.112653   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:52.112664   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:52.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:52.160799   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:52.160841   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:52.174323   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:52.174355   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:52.247358   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:52.247383   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:52.247395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:52.326071   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:52.326108   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:49.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.461239   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.724514   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.724742   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.848418   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:53.346329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.347199   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:54.866454   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:54.879015   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:54.879093   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:54.911729   74485 cri.go:89] found id: ""
	I1105 19:13:54.911765   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.911777   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:54.911785   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:54.911846   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:54.943137   74485 cri.go:89] found id: ""
	I1105 19:13:54.943169   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.943185   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:54.943193   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:54.943253   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:54.977951   74485 cri.go:89] found id: ""
	I1105 19:13:54.977980   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.977991   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:54.977998   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:54.978061   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:55.009453   74485 cri.go:89] found id: ""
	I1105 19:13:55.009478   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.009486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:55.009491   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:55.009537   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:55.040790   74485 cri.go:89] found id: ""
	I1105 19:13:55.040814   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.040821   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:55.040827   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:55.040878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:55.073401   74485 cri.go:89] found id: ""
	I1105 19:13:55.073430   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.073441   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:55.073449   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:55.073508   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:55.105419   74485 cri.go:89] found id: ""
	I1105 19:13:55.105443   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.105451   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:55.105456   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:55.105511   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:55.137363   74485 cri.go:89] found id: ""
	I1105 19:13:55.137395   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.137406   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:55.137416   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:55.137431   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:55.174176   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:55.174201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:55.221658   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:55.221693   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:55.235044   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:55.235070   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:55.308192   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:55.308218   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:55.308234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:54.461424   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:56.961198   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.223920   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.224915   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.847329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:00.347371   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.892462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:57.905472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:57.905543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:57.946044   74485 cri.go:89] found id: ""
	I1105 19:13:57.946071   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.946081   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:57.946089   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:57.946149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:57.980762   74485 cri.go:89] found id: ""
	I1105 19:13:57.980791   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.980803   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:57.980811   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:57.980874   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:58.013351   74485 cri.go:89] found id: ""
	I1105 19:13:58.013374   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.013381   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:58.013386   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:58.013433   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:58.049056   74485 cri.go:89] found id: ""
	I1105 19:13:58.049083   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.049091   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:58.049097   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:58.049147   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:58.081476   74485 cri.go:89] found id: ""
	I1105 19:13:58.081507   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.081517   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:58.081524   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:58.081583   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:58.114526   74485 cri.go:89] found id: ""
	I1105 19:13:58.114554   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.114564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:58.114571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:58.114630   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:58.148219   74485 cri.go:89] found id: ""
	I1105 19:13:58.148243   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.148252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:58.148257   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:58.148312   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:58.183254   74485 cri.go:89] found id: ""
	I1105 19:13:58.183277   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.183285   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:58.183292   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:58.183304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:58.234747   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:58.234785   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:58.248269   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:58.248300   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:58.313290   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:58.313312   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:58.313327   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:58.389847   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:58.389889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:00.927957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:00.941525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:00.941593   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:00.974891   74485 cri.go:89] found id: ""
	I1105 19:14:00.974920   74485 logs.go:282] 0 containers: []
	W1105 19:14:00.974931   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:00.974938   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:00.975018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:01.008224   74485 cri.go:89] found id: ""
	I1105 19:14:01.008250   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.008262   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:01.008270   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:01.008328   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:01.044514   74485 cri.go:89] found id: ""
	I1105 19:14:01.044545   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.044553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:01.044559   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:01.044614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:01.077091   74485 cri.go:89] found id: ""
	I1105 19:14:01.077124   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.077135   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:01.077141   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:01.077197   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:01.109947   74485 cri.go:89] found id: ""
	I1105 19:14:01.109976   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.109986   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:01.109994   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:01.110054   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:01.146162   74485 cri.go:89] found id: ""
	I1105 19:14:01.146193   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.146203   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:01.146211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:01.146275   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:01.180335   74485 cri.go:89] found id: ""
	I1105 19:14:01.180360   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.180370   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:01.180377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:01.180436   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:01.216160   74485 cri.go:89] found id: ""
	I1105 19:14:01.216189   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.216199   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:01.216221   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:01.216236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:01.229426   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:01.229455   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:01.298847   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:01.298874   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:01.298889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:01.375255   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:01.375299   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:01.417946   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:01.418026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:59.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.961362   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:59.724103   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.724976   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.725344   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:02.349032   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:04.847734   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.973713   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:03.987128   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:03.987198   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:04.020050   74485 cri.go:89] found id: ""
	I1105 19:14:04.020081   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.020091   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:04.020098   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:04.020164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:04.053458   74485 cri.go:89] found id: ""
	I1105 19:14:04.053485   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.053492   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:04.053498   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:04.053544   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:04.086417   74485 cri.go:89] found id: ""
	I1105 19:14:04.086442   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.086455   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:04.086461   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:04.086513   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:04.122035   74485 cri.go:89] found id: ""
	I1105 19:14:04.122059   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.122067   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:04.122073   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:04.122120   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:04.158732   74485 cri.go:89] found id: ""
	I1105 19:14:04.158758   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.158765   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:04.158771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:04.158822   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:04.190497   74485 cri.go:89] found id: ""
	I1105 19:14:04.190525   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.190536   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:04.190543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:04.190604   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:04.222040   74485 cri.go:89] found id: ""
	I1105 19:14:04.222066   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.222074   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:04.222079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:04.222131   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:04.258753   74485 cri.go:89] found id: ""
	I1105 19:14:04.258781   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.258793   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:04.258804   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:04.258819   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:04.299966   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:04.300052   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:04.355364   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:04.355395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:04.368954   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:04.368980   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:04.431658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:04.431688   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:04.431700   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.015289   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:07.029580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:07.029644   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:07.066931   74485 cri.go:89] found id: ""
	I1105 19:14:07.066964   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.066993   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:07.067004   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:07.067059   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:07.104315   74485 cri.go:89] found id: ""
	I1105 19:14:07.104341   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.104349   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:07.104354   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:07.104401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:07.141271   74485 cri.go:89] found id: ""
	I1105 19:14:07.141298   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.141305   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:07.141311   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:07.141360   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:07.174600   74485 cri.go:89] found id: ""
	I1105 19:14:07.174631   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.174643   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:07.174653   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:07.174707   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:07.211920   74485 cri.go:89] found id: ""
	I1105 19:14:07.211958   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.211969   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:07.211975   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:07.212027   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:07.248238   74485 cri.go:89] found id: ""
	I1105 19:14:07.248269   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.248280   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:07.248286   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:07.248344   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:07.279833   74485 cri.go:89] found id: ""
	I1105 19:14:07.279864   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.279874   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:07.279881   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:07.279931   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:07.317411   74485 cri.go:89] found id: ""
	I1105 19:14:07.317441   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.317452   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:07.317461   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:07.317474   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:07.390499   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:07.390535   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:07.390556   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.488858   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:07.488895   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:07.528612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:07.528645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:07.581884   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:07.581927   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:03.961433   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.460953   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.223402   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:08.723797   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:07.348258   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:09.846465   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.096089   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:10.110828   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:10.110898   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:10.147299   74485 cri.go:89] found id: ""
	I1105 19:14:10.147332   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.147344   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:10.147350   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:10.147401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:10.181457   74485 cri.go:89] found id: ""
	I1105 19:14:10.181482   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.181489   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:10.181495   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:10.181540   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:10.215210   74485 cri.go:89] found id: ""
	I1105 19:14:10.215241   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.215252   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:10.215259   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:10.215319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:10.249587   74485 cri.go:89] found id: ""
	I1105 19:14:10.249609   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.249617   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:10.249625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:10.249679   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:10.282566   74485 cri.go:89] found id: ""
	I1105 19:14:10.282591   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.282598   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:10.282604   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:10.282672   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:10.314312   74485 cri.go:89] found id: ""
	I1105 19:14:10.314344   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.314355   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:10.314361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:10.314415   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:10.346988   74485 cri.go:89] found id: ""
	I1105 19:14:10.347016   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.347028   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:10.347035   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:10.347088   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:10.381326   74485 cri.go:89] found id: ""
	I1105 19:14:10.381354   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.381370   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:10.381380   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:10.381394   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:10.418311   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:10.418344   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:10.469559   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:10.469590   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:10.482394   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:10.482427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:10.551831   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:10.551854   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:10.551870   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:08.462072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.961478   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:12.724974   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:11.846737   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:14.346050   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:13.127576   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:13.143182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:13.143242   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:13.188794   74485 cri.go:89] found id: ""
	I1105 19:14:13.188827   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.188839   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:13.188846   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:13.188897   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:13.221790   74485 cri.go:89] found id: ""
	I1105 19:14:13.221818   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.221829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:13.221836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:13.221893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:13.255164   74485 cri.go:89] found id: ""
	I1105 19:14:13.255194   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.255205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:13.255212   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:13.255272   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:13.288203   74485 cri.go:89] found id: ""
	I1105 19:14:13.288231   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.288241   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:13.288249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:13.288307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:13.321438   74485 cri.go:89] found id: ""
	I1105 19:14:13.321463   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.321475   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:13.321482   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:13.321541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:13.361858   74485 cri.go:89] found id: ""
	I1105 19:14:13.361886   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.361897   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:13.361905   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:13.361979   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:13.394210   74485 cri.go:89] found id: ""
	I1105 19:14:13.394239   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.394252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:13.394260   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:13.394324   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:13.434665   74485 cri.go:89] found id: ""
	I1105 19:14:13.434697   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.434705   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:13.434712   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:13.434724   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:13.447849   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:13.447875   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:13.514353   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:13.514377   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:13.514390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:13.590746   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:13.590784   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:13.627704   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:13.627732   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:16.180171   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:16.193282   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:16.193342   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:16.230087   74485 cri.go:89] found id: ""
	I1105 19:14:16.230118   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.230128   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:16.230137   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:16.230200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:16.264315   74485 cri.go:89] found id: ""
	I1105 19:14:16.264348   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.264360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:16.264368   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:16.264429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:16.298197   74485 cri.go:89] found id: ""
	I1105 19:14:16.298231   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.298243   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:16.298251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:16.298316   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:16.333149   74485 cri.go:89] found id: ""
	I1105 19:14:16.333180   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.333193   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:16.333203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:16.333268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:16.366863   74485 cri.go:89] found id: ""
	I1105 19:14:16.366887   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.366895   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:16.366900   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:16.366947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:16.400434   74485 cri.go:89] found id: ""
	I1105 19:14:16.400458   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.400466   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:16.400472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:16.400524   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:16.435475   74485 cri.go:89] found id: ""
	I1105 19:14:16.435497   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.435504   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:16.435510   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:16.435560   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:16.470577   74485 cri.go:89] found id: ""
	I1105 19:14:16.470604   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.470612   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:16.470620   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:16.470632   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:16.483061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:16.483094   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:16.550662   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:16.550690   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:16.550702   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:16.629372   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:16.629411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:16.669488   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:16.669526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:12.961576   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.461132   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.461748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.224068   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.225065   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:16.347305   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:18.847161   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.219244   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:19.232682   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:19.232744   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:19.264594   74485 cri.go:89] found id: ""
	I1105 19:14:19.264624   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.264635   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:19.264649   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:19.264708   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:19.301434   74485 cri.go:89] found id: ""
	I1105 19:14:19.301468   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.301479   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:19.301487   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:19.301558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:19.333465   74485 cri.go:89] found id: ""
	I1105 19:14:19.333494   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.333502   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:19.333508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:19.333558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:19.365865   74485 cri.go:89] found id: ""
	I1105 19:14:19.365892   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.365900   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:19.365906   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:19.365958   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:19.406533   74485 cri.go:89] found id: ""
	I1105 19:14:19.406563   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.406575   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:19.406583   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:19.406639   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:19.439351   74485 cri.go:89] found id: ""
	I1105 19:14:19.439377   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.439386   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:19.439392   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:19.439438   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:19.475033   74485 cri.go:89] found id: ""
	I1105 19:14:19.475058   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.475065   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:19.475070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:19.475119   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:19.508638   74485 cri.go:89] found id: ""
	I1105 19:14:19.508662   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.508670   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:19.508678   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:19.508689   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:19.588268   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:19.588293   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:19.588304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:19.671382   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:19.671415   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:19.716497   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:19.716526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:19.769686   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:19.769722   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.283476   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:22.296393   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:22.296456   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:22.331226   74485 cri.go:89] found id: ""
	I1105 19:14:22.331247   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.331255   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:22.331261   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:22.331306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:22.363466   74485 cri.go:89] found id: ""
	I1105 19:14:22.363499   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.363510   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:22.363518   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:22.363586   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:22.397025   74485 cri.go:89] found id: ""
	I1105 19:14:22.397052   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.397061   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:22.397066   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:22.397116   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:22.429450   74485 cri.go:89] found id: ""
	I1105 19:14:22.429476   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.429486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:22.429493   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:22.429554   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:22.461615   74485 cri.go:89] found id: ""
	I1105 19:14:22.461643   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.461654   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:22.461660   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:22.461728   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:22.492470   74485 cri.go:89] found id: ""
	I1105 19:14:22.492502   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.492513   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:22.492521   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:22.492587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:22.525335   74485 cri.go:89] found id: ""
	I1105 19:14:22.525358   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.525366   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:22.525372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:22.525423   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:22.558854   74485 cri.go:89] found id: ""
	I1105 19:14:22.558881   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.558890   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:22.558901   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:22.558916   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:22.608638   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:22.608674   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.621769   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:22.621800   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:14:19.461812   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.960286   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.724482   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:22.224505   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:24.225072   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.347018   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:23.347099   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	W1105 19:14:22.688971   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:22.688998   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:22.689012   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:22.770517   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:22.770558   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:25.315778   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:25.335372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:25.335444   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:25.383988   74485 cri.go:89] found id: ""
	I1105 19:14:25.384019   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.384029   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:25.384036   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:25.384096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:25.432070   74485 cri.go:89] found id: ""
	I1105 19:14:25.432103   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.432115   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:25.432122   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:25.432184   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:25.464859   74485 cri.go:89] found id: ""
	I1105 19:14:25.464891   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.464902   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:25.464909   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:25.464976   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:25.498684   74485 cri.go:89] found id: ""
	I1105 19:14:25.498712   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.498719   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:25.498724   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:25.498777   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:25.532998   74485 cri.go:89] found id: ""
	I1105 19:14:25.533023   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.533032   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:25.533039   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:25.533084   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:25.568101   74485 cri.go:89] found id: ""
	I1105 19:14:25.568130   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.568138   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:25.568144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:25.568208   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:25.600470   74485 cri.go:89] found id: ""
	I1105 19:14:25.600495   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.600503   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:25.600509   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:25.600564   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:25.631792   74485 cri.go:89] found id: ""
	I1105 19:14:25.631824   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.631834   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:25.631845   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:25.631860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:25.683820   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:25.683856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:25.698066   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:25.698095   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:25.764838   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:25.764869   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:25.764886   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:25.838791   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:25.838828   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:23.966002   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.460153   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.724324   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:29.223490   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:25.847528   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.346739   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.376183   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:28.389686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:28.389760   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:28.424180   74485 cri.go:89] found id: ""
	I1105 19:14:28.424209   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.424221   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:28.424229   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:28.424289   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:28.462742   74485 cri.go:89] found id: ""
	I1105 19:14:28.462765   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.462777   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:28.462784   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:28.462839   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:28.494550   74485 cri.go:89] found id: ""
	I1105 19:14:28.494574   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.494581   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:28.494588   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:28.494667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:28.525606   74485 cri.go:89] found id: ""
	I1105 19:14:28.525632   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.525639   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:28.525645   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:28.525696   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:28.558599   74485 cri.go:89] found id: ""
	I1105 19:14:28.558628   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.558638   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:28.558644   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:28.558701   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:28.590496   74485 cri.go:89] found id: ""
	I1105 19:14:28.590522   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.590530   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:28.590535   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:28.590599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:28.622748   74485 cri.go:89] found id: ""
	I1105 19:14:28.622772   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.622780   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:28.622786   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:28.622836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:28.656452   74485 cri.go:89] found id: ""
	I1105 19:14:28.656477   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.656485   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:28.656493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:28.656504   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.736458   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:28.736505   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:28.771923   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:28.771954   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:28.821099   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:28.821133   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:28.834698   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:28.834726   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:28.900543   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
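	(The cycle above repeats every few seconds: each pass runs sudo crictl ps -a --quiet --name=<component> for every control-plane component, finds zero containers, and kubectl describe nodes then fails with connection refused on localhost:8443, which suggests the kube-apiserver never came up on this node. As a rough illustration only, not minikube's own code, which drives these same commands over SSH via ssh_runner.go, the following is a minimal Go sketch of the same crictl poll; it assumes it is run directly on the node with crictl installed and sudo available.)

	// poll_sketch.go: hypothetical illustration of the container poll seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerIDs mirrors `sudo crictl ps -a --quiet --name=<component>` from the log
	// and returns the container IDs found for that component (empty when none exist).
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for attempt := 1; attempt <= 3; attempt++ {
			for _, c := range components {
				ids, err := containerIDs(c)
				if err != nil {
					fmt.Printf("crictl failed for %q: %v\n", c, err)
					continue
				}
				fmt.Printf("%s: %d containers\n", c, len(ids))
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between polling cycles
		}
	}

	(An empty ID list for kube-apiserver corresponds to the `found id: ""` / `0 containers` lines recorded in each pass.)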
	I1105 19:14:31.400733   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:31.414573   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:31.414647   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:31.452244   74485 cri.go:89] found id: ""
	I1105 19:14:31.452275   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.452286   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:31.452293   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:31.452353   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:31.485898   74485 cri.go:89] found id: ""
	I1105 19:14:31.485920   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.485935   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:31.485940   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:31.486009   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:31.522826   74485 cri.go:89] found id: ""
	I1105 19:14:31.522850   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.522858   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:31.522865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:31.522925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:31.560096   74485 cri.go:89] found id: ""
	I1105 19:14:31.560136   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.560164   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:31.560174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:31.560234   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:31.596698   74485 cri.go:89] found id: ""
	I1105 19:14:31.596725   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.596733   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:31.596738   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:31.596792   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:31.635109   74485 cri.go:89] found id: ""
	I1105 19:14:31.635138   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.635148   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:31.635156   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:31.635221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:31.667612   74485 cri.go:89] found id: ""
	I1105 19:14:31.667639   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.667651   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:31.667658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:31.667726   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:31.699815   74485 cri.go:89] found id: ""
	I1105 19:14:31.699844   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.699854   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:31.699864   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:31.699879   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:31.737165   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:31.737196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:31.788513   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:31.788550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:31.801580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:31.801609   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:31.871658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.871683   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:31.871696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.462108   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.961875   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:31.223977   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:33.724027   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.847090   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:32.847233   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.847857   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.450954   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:34.466129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:34.466204   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:34.499984   74485 cri.go:89] found id: ""
	I1105 19:14:34.500009   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.500020   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:34.500027   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:34.500091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:34.532923   74485 cri.go:89] found id: ""
	I1105 19:14:34.532950   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.532958   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:34.532969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:34.533017   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:34.566772   74485 cri.go:89] found id: ""
	I1105 19:14:34.566803   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.566811   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:34.566817   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:34.566872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:34.607398   74485 cri.go:89] found id: ""
	I1105 19:14:34.607422   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.607430   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:34.607435   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:34.607497   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:34.640091   74485 cri.go:89] found id: ""
	I1105 19:14:34.640123   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.640135   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:34.640143   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:34.640207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:34.677164   74485 cri.go:89] found id: ""
	I1105 19:14:34.677201   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.677211   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:34.677217   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:34.677266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:34.714900   74485 cri.go:89] found id: ""
	I1105 19:14:34.714931   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.714942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:34.714949   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:34.715023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:34.751003   74485 cri.go:89] found id: ""
	I1105 19:14:34.751032   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.751040   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:34.751048   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:34.751059   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:34.822279   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:34.822301   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:34.822315   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:34.898607   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:34.898640   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:34.934727   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:34.934754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:34.985935   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:34.985969   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.500117   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:37.512467   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:37.512541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:37.544914   74485 cri.go:89] found id: ""
	I1105 19:14:37.544941   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.544952   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:37.544959   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:37.545028   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:37.581507   74485 cri.go:89] found id: ""
	I1105 19:14:37.581535   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.581545   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:37.581553   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:37.581612   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:37.615546   74485 cri.go:89] found id: ""
	I1105 19:14:37.615576   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.615585   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:37.615592   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:37.615667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:37.648239   74485 cri.go:89] found id: ""
	I1105 19:14:37.648267   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.648276   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:37.648283   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:37.648343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:33.460860   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:35.461416   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:36.224852   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:38.725488   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.347563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:39.347732   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.682861   74485 cri.go:89] found id: ""
	I1105 19:14:37.682891   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.682898   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:37.682904   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:37.682952   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:37.715506   74485 cri.go:89] found id: ""
	I1105 19:14:37.715532   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.715540   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:37.715547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:37.715597   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:37.747973   74485 cri.go:89] found id: ""
	I1105 19:14:37.748003   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.748014   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:37.748022   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:37.748083   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:37.780270   74485 cri.go:89] found id: ""
	I1105 19:14:37.780294   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.780302   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:37.780310   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:37.780321   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.793885   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:37.793914   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:37.860114   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:37.860140   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:37.860154   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:37.941221   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:37.941255   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.980537   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:37.980567   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.532301   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:40.545540   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:40.545599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:40.578642   74485 cri.go:89] found id: ""
	I1105 19:14:40.578687   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.578699   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:40.578707   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:40.578772   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:40.612049   74485 cri.go:89] found id: ""
	I1105 19:14:40.612078   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.612089   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:40.612097   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:40.612159   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:40.644495   74485 cri.go:89] found id: ""
	I1105 19:14:40.644519   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.644527   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:40.644532   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:40.644587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:40.676890   74485 cri.go:89] found id: ""
	I1105 19:14:40.676923   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.676931   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:40.676937   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:40.676984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:40.710095   74485 cri.go:89] found id: ""
	I1105 19:14:40.710125   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.710136   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:40.710144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:40.710200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:40.748323   74485 cri.go:89] found id: ""
	I1105 19:14:40.748353   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.748364   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:40.748372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:40.748501   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:40.781578   74485 cri.go:89] found id: ""
	I1105 19:14:40.781606   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.781618   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:40.781626   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:40.781689   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:40.816010   74485 cri.go:89] found id: ""
	I1105 19:14:40.816048   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.816060   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:40.816071   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:40.816086   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.869836   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:40.869876   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:40.883436   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:40.883471   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:40.946538   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:40.946566   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:40.946585   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:41.023085   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:41.023123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.962163   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.461278   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.726894   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.224939   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:41.847053   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:44.346789   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.566841   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:43.579425   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:43.579498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:43.620500   74485 cri.go:89] found id: ""
	I1105 19:14:43.620526   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.620535   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:43.620541   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:43.620600   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:43.652992   74485 cri.go:89] found id: ""
	I1105 19:14:43.653024   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.653035   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:43.653042   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:43.653105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:43.686945   74485 cri.go:89] found id: ""
	I1105 19:14:43.686991   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.687003   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:43.687010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:43.687124   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:43.720075   74485 cri.go:89] found id: ""
	I1105 19:14:43.720103   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.720114   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:43.720121   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:43.720179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:43.757969   74485 cri.go:89] found id: ""
	I1105 19:14:43.757997   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.758005   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:43.758011   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:43.758071   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:43.790068   74485 cri.go:89] found id: ""
	I1105 19:14:43.790094   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.790103   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:43.790109   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:43.790153   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:43.821696   74485 cri.go:89] found id: ""
	I1105 19:14:43.821722   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.821733   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:43.821741   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:43.821803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:43.855976   74485 cri.go:89] found id: ""
	I1105 19:14:43.856003   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.856011   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:43.856019   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:43.856029   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:43.934375   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:43.934409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:43.972567   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:43.972597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:44.025660   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:44.025696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:44.039229   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:44.039258   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:44.112179   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
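	(For reference, the log-gathering half of each pass shells out to journalctl, dmesg and crictl with the flags shown above. Below is a minimal Go sketch of those diagnostics; again an assumption-laden illustration rather than the project's implementation, intended to be run directly on the node by a user with sudo rights.)

	// gather_sketch.go: hypothetical reproduction of the diagnostics collected above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, labelling it with the args used.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("== %s %v (err=%v) ==\n%s\n", name, args, err, out)
	}

	func main() {
		// kubelet and CRI-O unit logs, last 400 lines each, as in the log above.
		run("sudo", "journalctl", "-u", "kubelet", "-n", "400")
		run("sudo", "journalctl", "-u", "crio", "-n", "400")
		// kernel warnings and errors.
		run("sudo", "bash", "-c", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		// container status; falls back to docker if crictl is unavailable.
		run("sudo", "bash", "-c", "`which crictl || echo crictl` ps -a || sudo docker ps -a")
	}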
	I1105 19:14:46.612815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:46.626070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:46.626145   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:46.659184   74485 cri.go:89] found id: ""
	I1105 19:14:46.659210   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.659218   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:46.659227   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:46.659288   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:46.691887   74485 cri.go:89] found id: ""
	I1105 19:14:46.691917   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.691928   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:46.691934   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:46.692003   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:46.725745   74485 cri.go:89] found id: ""
	I1105 19:14:46.725776   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.725787   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:46.725795   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:46.725847   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:46.761733   74485 cri.go:89] found id: ""
	I1105 19:14:46.761762   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.761773   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:46.761780   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:46.761842   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:46.792926   74485 cri.go:89] found id: ""
	I1105 19:14:46.792955   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.792966   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:46.792974   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:46.793036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:46.824462   74485 cri.go:89] found id: ""
	I1105 19:14:46.824503   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.824512   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:46.824519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:46.824580   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:46.865057   74485 cri.go:89] found id: ""
	I1105 19:14:46.865082   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.865090   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:46.865095   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:46.865146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:46.901357   74485 cri.go:89] found id: ""
	I1105 19:14:46.901385   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.901393   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:46.901401   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:46.901414   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:46.951986   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:46.952021   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:46.966035   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:46.966065   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:47.035163   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:47.035184   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:47.035196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:47.115825   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:47.115860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:42.961397   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.460846   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.724189   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.724319   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:46.847553   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.346787   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.658737   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:49.672088   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:49.672182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:49.708638   74485 cri.go:89] found id: ""
	I1105 19:14:49.708666   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.708674   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:49.708679   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:49.708736   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:49.744485   74485 cri.go:89] found id: ""
	I1105 19:14:49.744513   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.744521   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:49.744526   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:49.744572   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:49.779758   74485 cri.go:89] found id: ""
	I1105 19:14:49.779785   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.779794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:49.779800   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:49.779858   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:49.814216   74485 cri.go:89] found id: ""
	I1105 19:14:49.814248   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.814256   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:49.814262   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:49.814310   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:49.851348   74485 cri.go:89] found id: ""
	I1105 19:14:49.851377   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.851389   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:49.851396   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:49.851455   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:49.883866   74485 cri.go:89] found id: ""
	I1105 19:14:49.883897   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.883906   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:49.883912   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:49.883959   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:49.916944   74485 cri.go:89] found id: ""
	I1105 19:14:49.916967   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.916975   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:49.916980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:49.917039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:49.950405   74485 cri.go:89] found id: ""
	I1105 19:14:49.950437   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.950449   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:49.950459   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:49.950475   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:49.996064   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:49.996102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:50.044865   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:50.044902   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:50.058206   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:50.058236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:50.130371   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:50.130397   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:50.130412   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:49.960550   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.961271   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.724896   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.224128   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.346823   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:53.847102   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.706441   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:52.719571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:52.719655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:52.753850   74485 cri.go:89] found id: ""
	I1105 19:14:52.753880   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.753891   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:52.753899   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:52.753961   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:52.794112   74485 cri.go:89] found id: ""
	I1105 19:14:52.794139   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.794149   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:52.794156   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:52.794218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:52.830151   74485 cri.go:89] found id: ""
	I1105 19:14:52.830178   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.830188   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:52.830195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:52.830258   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:52.864803   74485 cri.go:89] found id: ""
	I1105 19:14:52.864832   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.864853   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:52.864868   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:52.864930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:52.897237   74485 cri.go:89] found id: ""
	I1105 19:14:52.897271   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.897282   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:52.897289   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:52.897351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:52.932236   74485 cri.go:89] found id: ""
	I1105 19:14:52.932262   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.932270   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:52.932275   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:52.932319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:52.965781   74485 cri.go:89] found id: ""
	I1105 19:14:52.965808   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.965817   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:52.965825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:52.965918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:52.999098   74485 cri.go:89] found id: ""
	I1105 19:14:52.999121   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.999129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:52.999137   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:52.999146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:53.051085   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:53.051127   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:53.064690   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:53.064717   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:53.128334   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:53.128358   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:53.128372   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:53.207751   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:53.207791   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:55.745430   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:55.758734   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:55.758821   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:55.791827   74485 cri.go:89] found id: ""
	I1105 19:14:55.791854   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.791862   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:55.791868   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:55.791922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:55.824191   74485 cri.go:89] found id: ""
	I1105 19:14:55.824217   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.824224   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:55.824230   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:55.824278   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:55.858579   74485 cri.go:89] found id: ""
	I1105 19:14:55.858611   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.858619   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:55.858625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:55.858673   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:55.891579   74485 cri.go:89] found id: ""
	I1105 19:14:55.891604   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.891612   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:55.891617   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:55.891663   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:55.924881   74485 cri.go:89] found id: ""
	I1105 19:14:55.924910   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.924920   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:55.924930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:55.924999   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:55.956634   74485 cri.go:89] found id: ""
	I1105 19:14:55.956663   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.956678   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:55.956686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:55.956742   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:55.988770   74485 cri.go:89] found id: ""
	I1105 19:14:55.988803   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.988814   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:55.988821   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:55.988880   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:56.022236   74485 cri.go:89] found id: ""
	I1105 19:14:56.022257   74485 logs.go:282] 0 containers: []
	W1105 19:14:56.022266   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:56.022273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:56.022284   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:56.073035   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:56.073071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:56.086899   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:56.086923   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:56.158219   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:56.158247   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:56.158259   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:56.246621   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:56.246660   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:53.962537   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.461516   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:54.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.725381   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:59.223995   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:55.847591   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.346027   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:00.349718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.791443   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:58.804398   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:58.804476   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:58.837812   74485 cri.go:89] found id: ""
	I1105 19:14:58.837840   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.837856   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:58.837863   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:58.837926   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:58.870154   74485 cri.go:89] found id: ""
	I1105 19:14:58.870186   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.870197   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:58.870204   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:58.870268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:58.906518   74485 cri.go:89] found id: ""
	I1105 19:14:58.906545   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.906553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:58.906563   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:58.906614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:58.939320   74485 cri.go:89] found id: ""
	I1105 19:14:58.939346   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.939357   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:58.939364   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:58.939426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:58.974116   74485 cri.go:89] found id: ""
	I1105 19:14:58.974143   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.974153   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:58.974160   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:58.974221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:59.006820   74485 cri.go:89] found id: ""
	I1105 19:14:59.006854   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.006866   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:59.006873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:59.006933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:59.039691   74485 cri.go:89] found id: ""
	I1105 19:14:59.039723   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.039735   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:59.039742   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:59.039800   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:59.071829   74485 cri.go:89] found id: ""
	I1105 19:14:59.071860   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.071881   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:59.071893   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:59.071906   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:59.124158   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:59.124195   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:59.138563   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:59.138594   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:59.216148   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:59.216174   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:59.216189   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:59.295262   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:59.295297   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:01.833789   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:01.847332   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:01.847408   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:01.882721   74485 cri.go:89] found id: ""
	I1105 19:15:01.882743   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.882750   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:01.882755   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:01.882811   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:01.916457   74485 cri.go:89] found id: ""
	I1105 19:15:01.916479   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.916487   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:01.916502   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:01.916557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:01.950521   74485 cri.go:89] found id: ""
	I1105 19:15:01.950552   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.950564   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:01.950571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:01.950624   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:01.985823   74485 cri.go:89] found id: ""
	I1105 19:15:01.985852   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.985862   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:01.985870   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:01.985918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:02.021689   74485 cri.go:89] found id: ""
	I1105 19:15:02.021720   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.021731   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:02.021739   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:02.021804   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:02.058632   74485 cri.go:89] found id: ""
	I1105 19:15:02.058658   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.058666   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:02.058672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:02.058738   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:02.097916   74485 cri.go:89] found id: ""
	I1105 19:15:02.097947   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.097956   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:02.097961   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:02.098010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:02.131992   74485 cri.go:89] found id: ""
	I1105 19:15:02.132027   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.132038   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:02.132050   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:02.132066   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:02.188605   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:02.188645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:02.201873   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:02.201904   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:02.274767   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:02.274795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:02.274811   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:02.358520   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:02.358559   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:58.962072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.461009   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.224719   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:03.724333   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:02.847593   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.348665   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:04.897693   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:04.913131   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:04.913207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:04.952546   74485 cri.go:89] found id: ""
	I1105 19:15:04.952571   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.952579   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:04.952584   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:04.952643   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:04.987334   74485 cri.go:89] found id: ""
	I1105 19:15:04.987360   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.987368   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:04.987374   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:04.987434   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:05.021873   74485 cri.go:89] found id: ""
	I1105 19:15:05.021906   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.021919   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:05.021926   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:05.021985   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:05.056169   74485 cri.go:89] found id: ""
	I1105 19:15:05.056199   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.056208   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:05.056213   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:05.056265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:05.093090   74485 cri.go:89] found id: ""
	I1105 19:15:05.093117   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.093125   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:05.093130   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:05.093182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:05.127533   74485 cri.go:89] found id: ""
	I1105 19:15:05.127557   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.127564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:05.127576   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:05.127625   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:05.165127   74485 cri.go:89] found id: ""
	I1105 19:15:05.165162   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.165173   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:05.165180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:05.165243   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:05.200526   74485 cri.go:89] found id: ""
	I1105 19:15:05.200556   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.200567   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:05.200578   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:05.200593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:05.247497   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:05.247535   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:05.261963   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:05.261996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:05.336813   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:05.336833   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:05.336844   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:05.412278   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:05.412320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:03.461266   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.463142   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.728530   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:08.227700   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.848748   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:10.346754   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.951085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:07.966125   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:07.966203   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:08.004253   74485 cri.go:89] found id: ""
	I1105 19:15:08.004291   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.004302   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:08.004310   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:08.004373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:08.039539   74485 cri.go:89] found id: ""
	I1105 19:15:08.039562   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.039569   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:08.039575   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:08.039629   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:08.076043   74485 cri.go:89] found id: ""
	I1105 19:15:08.076080   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.076093   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:08.076101   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:08.076157   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:08.110489   74485 cri.go:89] found id: ""
	I1105 19:15:08.110512   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.110519   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:08.110525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:08.110589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:08.147532   74485 cri.go:89] found id: ""
	I1105 19:15:08.147564   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.147574   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:08.147580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:08.147628   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:08.182225   74485 cri.go:89] found id: ""
	I1105 19:15:08.182248   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.182256   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:08.182263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:08.182322   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:08.223488   74485 cri.go:89] found id: ""
	I1105 19:15:08.223524   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.223536   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:08.223544   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:08.223610   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:08.266524   74485 cri.go:89] found id: ""
	I1105 19:15:08.266559   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.266571   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:08.266582   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:08.266597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:08.279036   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:08.279061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:08.346030   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:08.346052   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:08.346064   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:08.428081   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:08.428118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:08.464760   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:08.464789   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.016193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:11.030598   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:11.030681   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:11.066035   74485 cri.go:89] found id: ""
	I1105 19:15:11.066064   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.066073   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:11.066078   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:11.066133   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:11.103906   74485 cri.go:89] found id: ""
	I1105 19:15:11.103937   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.103948   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:11.103955   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:11.104023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:11.142936   74485 cri.go:89] found id: ""
	I1105 19:15:11.143024   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.143034   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:11.143041   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:11.143091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:11.180041   74485 cri.go:89] found id: ""
	I1105 19:15:11.180074   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.180086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:11.180094   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:11.180158   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:11.215661   74485 cri.go:89] found id: ""
	I1105 19:15:11.215693   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.215701   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:11.215707   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:11.215758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:11.252603   74485 cri.go:89] found id: ""
	I1105 19:15:11.252651   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.252663   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:11.252672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:11.252739   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:11.299295   74485 cri.go:89] found id: ""
	I1105 19:15:11.299328   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.299340   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:11.299347   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:11.299402   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:11.355153   74485 cri.go:89] found id: ""
	I1105 19:15:11.355177   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.355185   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:11.355193   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:11.355206   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:11.441076   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:11.441118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:11.480367   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:11.480396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.534646   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:11.534683   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:11.548141   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:11.548170   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:11.616452   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:07.961073   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:09.962118   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.455874   73732 pod_ready.go:82] duration metric: took 4m0.000853559s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:12.455911   73732 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:15:12.455936   73732 pod_ready.go:39] duration metric: took 4m14.55377544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:12.455984   73732 kubeadm.go:597] duration metric: took 4m23.030552871s to restartPrimaryControlPlane
	W1105 19:15:12.456078   73732 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:12.456111   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:10.724247   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.725886   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.846646   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.848074   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.117448   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:14.131224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:14.131297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:14.167811   74485 cri.go:89] found id: ""
	I1105 19:15:14.167843   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.167855   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:14.167862   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:14.167921   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:14.204128   74485 cri.go:89] found id: ""
	I1105 19:15:14.204156   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.204164   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:14.204169   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:14.204232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:14.240687   74485 cri.go:89] found id: ""
	I1105 19:15:14.240716   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.240727   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:14.240735   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:14.240788   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:14.274204   74485 cri.go:89] found id: ""
	I1105 19:15:14.274231   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.274242   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:14.274249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:14.274307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:14.312090   74485 cri.go:89] found id: ""
	I1105 19:15:14.312119   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.312130   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:14.312139   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:14.312200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:14.346824   74485 cri.go:89] found id: ""
	I1105 19:15:14.346857   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.346868   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:14.346875   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:14.346934   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:14.380634   74485 cri.go:89] found id: ""
	I1105 19:15:14.380668   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.380679   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:14.380686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:14.380746   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:14.414402   74485 cri.go:89] found id: ""
	I1105 19:15:14.414432   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.414441   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:14.414449   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:14.414459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:14.464542   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:14.464581   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:14.478195   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:14.478225   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:14.553670   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:14.553693   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:14.553708   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:14.634619   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:14.634659   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.174085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:17.191712   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:17.191771   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:17.234101   74485 cri.go:89] found id: ""
	I1105 19:15:17.234132   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.234143   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:17.234149   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:17.234213   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:17.281548   74485 cri.go:89] found id: ""
	I1105 19:15:17.281574   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.281581   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:17.281588   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:17.281655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:17.337698   74485 cri.go:89] found id: ""
	I1105 19:15:17.337727   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.337735   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:17.337743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:17.337790   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:17.371756   74485 cri.go:89] found id: ""
	I1105 19:15:17.371782   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.371790   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:17.371796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:17.371854   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:17.404989   74485 cri.go:89] found id: ""
	I1105 19:15:17.405015   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.405026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:17.405033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:17.405096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:17.438613   74485 cri.go:89] found id: ""
	I1105 19:15:17.438637   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.438648   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:17.438656   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:17.438717   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:17.470465   74485 cri.go:89] found id: ""
	I1105 19:15:17.470494   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.470502   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:17.470508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:17.470558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:17.503835   74485 cri.go:89] found id: ""
	I1105 19:15:17.503867   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.503876   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:17.503884   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:17.503896   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:17.584110   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:17.584146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.626928   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:17.626955   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:15.223749   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.225434   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.347847   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:19.847047   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.679356   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:17.679397   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:17.693476   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:17.693506   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:17.766809   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.266926   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:20.282219   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:20.282293   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:20.322133   74485 cri.go:89] found id: ""
	I1105 19:15:20.322163   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.322171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:20.322178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:20.322248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:20.357030   74485 cri.go:89] found id: ""
	I1105 19:15:20.357072   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.357084   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:20.357091   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:20.357156   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:20.390523   74485 cri.go:89] found id: ""
	I1105 19:15:20.390549   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.390559   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:20.390567   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:20.390631   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:20.425807   74485 cri.go:89] found id: ""
	I1105 19:15:20.425830   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.425837   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:20.425843   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:20.425903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:20.461984   74485 cri.go:89] found id: ""
	I1105 19:15:20.462014   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.462026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:20.462033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:20.462094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:20.495689   74485 cri.go:89] found id: ""
	I1105 19:15:20.495725   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.495739   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:20.495746   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:20.495799   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:20.528666   74485 cri.go:89] found id: ""
	I1105 19:15:20.528701   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.528713   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:20.528721   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:20.528783   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:20.562566   74485 cri.go:89] found id: ""
	I1105 19:15:20.562596   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.562606   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:20.562614   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:20.562624   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:20.610961   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:20.611000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:20.623898   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:20.623928   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:20.696412   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.696440   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:20.696456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:20.779601   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:20.779642   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:19.725198   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.224019   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.225286   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.347992   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.846718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:23.319846   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:23.333278   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:23.333357   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:23.370771   74485 cri.go:89] found id: ""
	I1105 19:15:23.370796   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.370805   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:23.370810   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:23.370872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:23.405994   74485 cri.go:89] found id: ""
	I1105 19:15:23.406021   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.406029   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:23.406034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:23.406092   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:23.443729   74485 cri.go:89] found id: ""
	I1105 19:15:23.443757   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.443767   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:23.443774   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:23.443836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:23.476162   74485 cri.go:89] found id: ""
	I1105 19:15:23.476188   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.476197   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:23.476205   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:23.476266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:23.509325   74485 cri.go:89] found id: ""
	I1105 19:15:23.509353   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.509363   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:23.509371   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:23.509427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:23.541880   74485 cri.go:89] found id: ""
	I1105 19:15:23.541912   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.541922   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:23.541929   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:23.541993   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:23.574204   74485 cri.go:89] found id: ""
	I1105 19:15:23.574236   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.574248   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:23.574256   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:23.574323   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:23.606865   74485 cri.go:89] found id: ""
	I1105 19:15:23.606896   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.606908   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:23.606918   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:23.606932   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:23.673771   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:23.673792   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:23.673803   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:23.753298   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:23.753335   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:23.792273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:23.792304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:23.843072   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:23.843110   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.356859   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:26.369417   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:26.369488   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:26.403611   74485 cri.go:89] found id: ""
	I1105 19:15:26.403639   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.403647   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:26.403653   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:26.403725   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:26.439891   74485 cri.go:89] found id: ""
	I1105 19:15:26.439924   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.439936   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:26.439943   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:26.439991   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:26.473502   74485 cri.go:89] found id: ""
	I1105 19:15:26.473542   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.473554   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:26.473561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:26.473640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:26.505666   74485 cri.go:89] found id: ""
	I1105 19:15:26.505695   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.505703   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:26.505710   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:26.505769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:26.539781   74485 cri.go:89] found id: ""
	I1105 19:15:26.539815   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.539827   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:26.539835   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:26.539911   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:26.574673   74485 cri.go:89] found id: ""
	I1105 19:15:26.574712   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.574721   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:26.574727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:26.574773   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:26.608410   74485 cri.go:89] found id: ""
	I1105 19:15:26.608433   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.608441   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:26.608446   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:26.608494   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:26.644036   74485 cri.go:89] found id: ""
	I1105 19:15:26.644065   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.644076   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:26.644087   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:26.644098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.718901   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:26.718937   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:26.758920   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:26.758953   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:26.811241   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:26.811277   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.824931   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:26.824961   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:26.891799   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:26.725062   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:27.724594   74141 pod_ready.go:82] duration metric: took 4m0.006622979s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:27.724627   74141 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1105 19:15:27.724644   74141 pod_ready.go:39] duration metric: took 4m0.807889519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:27.724663   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:15:27.724711   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:27.724769   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:27.771870   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:27.771897   74141 cri.go:89] found id: ""
	I1105 19:15:27.771906   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:27.771966   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.776484   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:27.776553   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:27.823529   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:27.823564   74141 cri.go:89] found id: ""
	I1105 19:15:27.823576   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:27.823638   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.828600   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:27.828685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:27.878206   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:27.878242   74141 cri.go:89] found id: ""
	I1105 19:15:27.878254   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:27.878317   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.882545   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:27.882640   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:27.920102   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:27.920127   74141 cri.go:89] found id: ""
	I1105 19:15:27.920137   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:27.920189   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.924516   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:27.924593   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:27.969493   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:27.969523   74141 cri.go:89] found id: ""
	I1105 19:15:27.969534   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:27.969589   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.973637   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:27.973724   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:28.014369   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.014396   74141 cri.go:89] found id: ""
	I1105 19:15:28.014405   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:28.014463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.019040   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:28.019112   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:28.056411   74141 cri.go:89] found id: ""
	I1105 19:15:28.056438   74141 logs.go:282] 0 containers: []
	W1105 19:15:28.056446   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:28.056452   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:28.056502   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:28.099541   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.099562   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.099566   74141 cri.go:89] found id: ""
	I1105 19:15:28.099573   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:28.099628   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.104144   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.108443   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:28.108465   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.153262   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:28.153302   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.197210   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:28.197237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:28.242915   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:28.242944   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:28.257468   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:28.257497   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:28.299795   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:28.299825   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:28.333983   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:28.334015   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:28.369174   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:28.369202   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:28.405838   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:28.405869   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:28.477842   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:28.477880   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:28.595832   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:28.595865   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:28.639146   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:28.639179   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.689519   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:28.689554   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.846977   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:28.847878   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:29.392417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:29.405249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:29.405331   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:29.437397   74485 cri.go:89] found id: ""
	I1105 19:15:29.437432   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.437443   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:29.437450   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:29.437504   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:29.469908   74485 cri.go:89] found id: ""
	I1105 19:15:29.469938   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.469946   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:29.469951   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:29.470008   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:29.502302   74485 cri.go:89] found id: ""
	I1105 19:15:29.502331   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.502339   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:29.502345   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:29.502391   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:29.534285   74485 cri.go:89] found id: ""
	I1105 19:15:29.534309   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.534317   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:29.534322   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:29.534373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:29.571918   74485 cri.go:89] found id: ""
	I1105 19:15:29.571962   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.571973   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:29.571983   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:29.572042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:29.605324   74485 cri.go:89] found id: ""
	I1105 19:15:29.605354   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.605365   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:29.605373   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:29.605435   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:29.640181   74485 cri.go:89] found id: ""
	I1105 19:15:29.640210   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.640218   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:29.640224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:29.640273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:29.671121   74485 cri.go:89] found id: ""
	I1105 19:15:29.671147   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.671155   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:29.671164   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:29.671174   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:29.750821   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:29.750856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:29.787452   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:29.787479   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:29.840413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:29.840459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:29.855540   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:29.855580   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:29.925849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:32.426016   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:32.438759   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:32.438824   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:32.476376   74485 cri.go:89] found id: ""
	I1105 19:15:32.476406   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.476416   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:32.476423   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:32.476490   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:32.512328   74485 cri.go:89] found id: ""
	I1105 19:15:32.512352   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.512360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:32.512365   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:32.512414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:32.546803   74485 cri.go:89] found id: ""
	I1105 19:15:32.546833   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.546844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:32.546851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:32.546925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:32.585904   74485 cri.go:89] found id: ""
	I1105 19:15:32.585934   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.585946   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:32.585953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:32.586014   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:32.620976   74485 cri.go:89] found id: ""
	I1105 19:15:32.621005   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.621012   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:32.621018   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:32.621082   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.668028   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:31.684024   74141 api_server.go:72] duration metric: took 4m12.496021782s to wait for apiserver process to appear ...
	I1105 19:15:31.684060   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:15:31.684105   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:31.684163   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:31.719462   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:31.719496   74141 cri.go:89] found id: ""
	I1105 19:15:31.719506   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:31.719559   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.723632   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:31.723702   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:31.761976   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:31.762001   74141 cri.go:89] found id: ""
	I1105 19:15:31.762010   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:31.762067   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.766066   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:31.766137   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:31.799673   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:31.799694   74141 cri.go:89] found id: ""
	I1105 19:15:31.799701   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:31.799753   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.803632   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:31.803714   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:31.841782   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:31.841808   74141 cri.go:89] found id: ""
	I1105 19:15:31.841818   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:31.841873   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.850409   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:31.850471   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:31.891932   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:31.891959   74141 cri.go:89] found id: ""
	I1105 19:15:31.891969   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:31.892026   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.896065   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:31.896125   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.932759   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:31.932781   74141 cri.go:89] found id: ""
	I1105 19:15:31.932788   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:31.932831   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.936611   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:31.936685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:31.971296   74141 cri.go:89] found id: ""
	I1105 19:15:31.971328   74141 logs.go:282] 0 containers: []
	W1105 19:15:31.971339   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:31.971348   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:31.971410   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:32.006153   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:32.006173   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.006177   74141 cri.go:89] found id: ""
	I1105 19:15:32.006184   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:32.006226   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.010159   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.013807   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.013831   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.084222   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:32.084273   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:32.127875   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:32.127928   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:32.173008   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:32.173041   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:32.235366   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.235402   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.714822   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:32.714861   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.750733   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.750761   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.796233   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.796264   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.809269   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.809296   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:32.931162   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:32.931196   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:32.968551   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:32.968578   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:33.008115   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:33.008152   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:33.046201   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:33.046237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:31.346652   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:33.347118   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:32.658958   74485 cri.go:89] found id: ""
	I1105 19:15:32.659006   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.659018   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:32.659026   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:32.659091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:32.694317   74485 cri.go:89] found id: ""
	I1105 19:15:32.694341   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.694349   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:32.694354   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:32.694403   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:32.728277   74485 cri.go:89] found id: ""
	I1105 19:15:32.728314   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.728327   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:32.728338   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.728352   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.815579   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.815615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.856776   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.856807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.909477   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.909518   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.923789   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.923817   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:32.997898   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:35.498040   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:35.511537   74485 kubeadm.go:597] duration metric: took 4m4.46832509s to restartPrimaryControlPlane
	W1105 19:15:35.511612   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:35.511644   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:35.586678   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:15:35.591512   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:15:35.592489   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:15:35.592507   74141 api_server.go:131] duration metric: took 3.908440367s to wait for apiserver health ...
	I1105 19:15:35.592514   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:15:35.592538   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:35.592589   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:35.636389   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.636408   74141 cri.go:89] found id: ""
	I1105 19:15:35.636416   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:35.636463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.640778   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:35.640839   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:35.676793   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:35.676818   74141 cri.go:89] found id: ""
	I1105 19:15:35.676828   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:35.676890   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.681596   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:35.681669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:35.721728   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:35.721754   74141 cri.go:89] found id: ""
	I1105 19:15:35.721763   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:35.721808   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.725619   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:35.725677   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:35.765348   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:35.765377   74141 cri.go:89] found id: ""
	I1105 19:15:35.765386   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:35.765439   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.769594   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:35.769669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:35.809427   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:35.809452   74141 cri.go:89] found id: ""
	I1105 19:15:35.809460   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:35.809505   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.814317   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:35.814376   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:35.853861   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:35.853882   74141 cri.go:89] found id: ""
	I1105 19:15:35.853890   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:35.853934   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.857734   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:35.857787   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:35.897791   74141 cri.go:89] found id: ""
	I1105 19:15:35.897816   74141 logs.go:282] 0 containers: []
	W1105 19:15:35.897824   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:35.897830   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:35.897887   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:35.940906   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:35.940940   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:35.940946   74141 cri.go:89] found id: ""
	I1105 19:15:35.940954   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:35.941006   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.945200   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.948860   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:35.948884   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.992660   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:35.992690   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:36.033586   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:36.033617   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:36.066599   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:36.066643   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:36.104895   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:36.104932   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:36.489747   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:36.489781   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:36.531923   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:36.531952   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:36.598718   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:36.598758   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:36.612969   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:36.612998   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:36.718535   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:36.718568   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:36.755636   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:36.755677   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:36.815561   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:36.815640   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:36.850878   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:36.850904   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:39.390699   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:15:39.390733   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.390738   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.390743   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.390747   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.390750   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.390753   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.390760   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.390764   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.390771   74141 system_pods.go:74] duration metric: took 3.798251189s to wait for pod list to return data ...
	I1105 19:15:39.390777   74141 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:15:39.393894   74141 default_sa.go:45] found service account: "default"
	I1105 19:15:39.393914   74141 default_sa.go:55] duration metric: took 3.132788ms for default service account to be created ...
	I1105 19:15:39.393929   74141 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:15:39.398455   74141 system_pods.go:86] 8 kube-system pods found
	I1105 19:15:39.398480   74141 system_pods.go:89] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.398485   74141 system_pods.go:89] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.398490   74141 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.398494   74141 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.398497   74141 system_pods.go:89] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.398501   74141 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.398508   74141 system_pods.go:89] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.398512   74141 system_pods.go:89] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.398520   74141 system_pods.go:126] duration metric: took 4.586494ms to wait for k8s-apps to be running ...
	I1105 19:15:39.398529   74141 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:15:39.398569   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.413878   74141 system_svc.go:56] duration metric: took 15.340417ms WaitForService to wait for kubelet
	I1105 19:15:39.413908   74141 kubeadm.go:582] duration metric: took 4m20.225910976s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:15:39.413936   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:15:39.416851   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:15:39.416870   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:15:39.416880   74141 node_conditions.go:105] duration metric: took 2.939584ms to run NodePressure ...
	I1105 19:15:39.416891   74141 start.go:241] waiting for startup goroutines ...
	I1105 19:15:39.416899   74141 start.go:246] waiting for cluster config update ...
	I1105 19:15:39.416911   74141 start.go:255] writing updated cluster config ...
	I1105 19:15:39.417211   74141 ssh_runner.go:195] Run: rm -f paused
	I1105 19:15:39.463773   74141 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:15:39.465688   74141 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-608095" cluster and "default" namespace by default
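Once minikube prints "Done!", the profile's context is active in the user's kubeconfig. A rough manual spot-check of the readiness gates it just waited on (node Ready, kube-system pods, the default service account), using only standard kubectl and the profile name from the log:

    kubectl --context default-k8s-diff-port-608095 get nodes
    kubectl --context default-k8s-diff-port-608095 -n kube-system get pods
    kubectl --context default-k8s-diff-port-608095 get serviceaccount default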
	I1105 19:15:39.702249   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.19058336s)
	I1105 19:15:39.702314   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.717966   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:39.728114   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:39.740451   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:39.740476   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:39.740519   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:39.751089   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:39.751150   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:39.761832   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:39.771841   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:39.771904   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:39.782332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.792379   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:39.792438   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.801625   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:39.811691   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:39.811740   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
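The grep/rm pairs above are the stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 (or does not exist) is removed before kubeadm init runs. A compact sketch of the same loop, with grep -q added only for quiet matching:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or pointing at a different endpoint
      fi
    done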
	I1105 19:15:39.821162   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:39.891377   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:15:39.891443   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:40.034176   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:40.034337   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:40.034476   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:15:40.211588   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:35.847491   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:38.346965   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.348252   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.213724   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:40.213838   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:40.213939   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:40.214048   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:40.214172   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:40.214266   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:40.214375   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:40.214478   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:40.214567   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:40.214687   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:40.214819   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:40.214884   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:40.214980   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:40.358606   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:40.632263   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:40.766570   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:40.885914   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:40.902379   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:40.903647   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:40.903716   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:41.040274   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:41.042093   74485 out.go:235]   - Booting up control plane ...
	I1105 19:15:41.042222   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:41.048448   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:41.058445   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:41.059466   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:41.062648   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:15:38.649673   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193536212s)
	I1105 19:15:38.649753   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:38.665214   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:38.674520   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:38.684078   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:38.684102   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:38.684151   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:38.693169   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:38.693239   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:38.702305   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:38.710796   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:38.710868   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:38.719716   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.728090   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:38.728143   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.737219   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:38.745625   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:38.745692   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:38.754684   73732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:38.914343   73732 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:15:42.847011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:44.851431   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:47.368221   73732 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:15:47.368296   73732 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:47.368405   73732 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:47.368552   73732 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:47.368686   73732 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:15:47.368787   73732 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:47.370333   73732 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:47.370429   73732 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:47.370529   73732 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:47.370650   73732 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:47.370763   73732 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:47.370900   73732 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:47.371009   73732 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:47.371110   73732 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:47.371198   73732 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:47.371312   73732 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:47.371431   73732 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:47.371494   73732 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:47.371573   73732 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:47.371656   73732 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:47.371725   73732 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:15:47.371797   73732 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:47.371893   73732 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:47.371976   73732 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:47.372074   73732 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:47.372160   73732 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:47.374386   73732 out.go:235]   - Booting up control plane ...
	I1105 19:15:47.374503   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:47.374622   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:47.374707   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:47.374838   73732 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:47.374950   73732 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:47.375046   73732 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:47.375226   73732 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:15:47.375367   73732 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:15:47.375450   73732 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.124171ms
	I1105 19:15:47.375549   73732 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:15:47.375647   73732 kubeadm.go:310] [api-check] The API server is healthy after 5.001431223s
	I1105 19:15:47.375804   73732 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:15:47.375968   73732 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:15:47.376055   73732 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:15:47.376321   73732 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-271881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:15:47.376412   73732 kubeadm.go:310] [bootstrap-token] Using token: 2xak8n.owgv6oncwawjarav
	I1105 19:15:47.377766   73732 out.go:235]   - Configuring RBAC rules ...
	I1105 19:15:47.377911   73732 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:15:47.378024   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:15:47.378138   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:15:47.378243   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:15:47.378337   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:15:47.378408   73732 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:15:47.378502   73732 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:15:47.378541   73732 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:15:47.378580   73732 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:15:47.378587   73732 kubeadm.go:310] 
	I1105 19:15:47.378635   73732 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:15:47.378645   73732 kubeadm.go:310] 
	I1105 19:15:47.378711   73732 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:15:47.378718   73732 kubeadm.go:310] 
	I1105 19:15:47.378760   73732 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:15:47.378813   73732 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:15:47.378856   73732 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:15:47.378860   73732 kubeadm.go:310] 
	I1105 19:15:47.378910   73732 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:15:47.378913   73732 kubeadm.go:310] 
	I1105 19:15:47.378955   73732 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:15:47.378959   73732 kubeadm.go:310] 
	I1105 19:15:47.379030   73732 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:15:47.379114   73732 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:15:47.379195   73732 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:15:47.379203   73732 kubeadm.go:310] 
	I1105 19:15:47.379320   73732 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:15:47.379427   73732 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:15:47.379442   73732 kubeadm.go:310] 
	I1105 19:15:47.379559   73732 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.379718   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:15:47.379762   73732 kubeadm.go:310] 	--control-plane 
	I1105 19:15:47.379770   73732 kubeadm.go:310] 
	I1105 19:15:47.379844   73732 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:15:47.379851   73732 kubeadm.go:310] 
	I1105 19:15:47.379977   73732 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.380150   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
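The join commands printed above embed the bootstrap token 2xak8n.owgv6oncwawjarav, which kubeadm issues with a default 24-hour TTL. If it expires before another node joins, a fresh join command can be generated on the control-plane node with standard kubeadm subcommands (not part of this run; shown only as an illustrative follow-up):

    sudo kubeadm token list                          # inspect existing bootstrap tokens
    sudo kubeadm token create --print-join-command   # mint a new token and print the join command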
	I1105 19:15:47.380167   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:15:47.380174   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:15:47.381714   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:15:47.382944   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:15:47.394080   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
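The scp line above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist, but the payload itself is not echoed in the log. The heredoc below is only an illustrative bridge conflist of that general shape; every field value is an assumption, not the bytes minikube actually wrote:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF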
	I1105 19:15:47.411715   73732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:15:47.411773   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.411821   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-271881 minikube.k8s.io/updated_at=2024_11_05T19_15_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=embed-certs-271881 minikube.k8s.io/primary=true
	I1105 19:15:47.439084   73732 ops.go:34] apiserver oom_adj: -16
	I1105 19:15:47.601691   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.348094   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:49.847296   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:48.102103   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:48.602767   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.101780   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.601826   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.101976   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.602763   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.102779   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.601930   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.102574   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.241636   73732 kubeadm.go:1113] duration metric: took 4.829922813s to wait for elevateKubeSystemPrivileges
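elevateKubeSystemPrivileges covers the two kubectl calls issued right after init above: creating the minikube-rbac clusterrolebinding for kube-system:default and labelling the node, then polling `kubectl get sa default` until the default service account exists. A quick verification of the result, assuming the kubeconfig context matches the profile name:

    kubectl --context embed-certs-271881 get clusterrolebinding minikube-rbac
    kubectl --context embed-certs-271881 get node embed-certs-271881 --show-labels
    kubectl --context embed-certs-271881 get serviceaccount default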
	I1105 19:15:52.241680   73732 kubeadm.go:394] duration metric: took 5m2.866246993s to StartCluster
	I1105 19:15:52.241704   73732 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.241801   73732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:15:52.244409   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.244716   73732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:15:52.244789   73732 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:15:52.244893   73732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-271881"
	I1105 19:15:52.244914   73732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-271881"
	I1105 19:15:52.244911   73732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-271881"
	I1105 19:15:52.244933   73732 addons.go:69] Setting metrics-server=true in profile "embed-certs-271881"
	I1105 19:15:52.244941   73732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-271881"
	I1105 19:15:52.244954   73732 addons.go:234] Setting addon metrics-server=true in "embed-certs-271881"
	W1105 19:15:52.244965   73732 addons.go:243] addon metrics-server should already be in state true
	I1105 19:15:52.244998   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1105 19:15:52.244925   73732 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:15:52.245001   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245065   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245404   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245422   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245436   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245455   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245464   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245543   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.246341   73732 out.go:177] * Verifying Kubernetes components...
	I1105 19:15:52.247801   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:15:52.261802   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I1105 19:15:52.262325   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.262955   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.263159   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.263591   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.264367   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.264413   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.265696   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I1105 19:15:52.265941   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I1105 19:15:52.266161   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266322   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266776   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266782   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266800   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.266803   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.267185   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267224   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267353   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.267804   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.267846   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.271094   73732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-271881"
	W1105 19:15:52.271117   73732 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:15:52.271147   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.271509   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.271554   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.284180   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I1105 19:15:52.284456   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1105 19:15:52.284703   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.284925   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.285248   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285261   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285355   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285363   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285578   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285727   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285766   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.285862   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.287834   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.288259   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.290341   73732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:15:52.290346   73732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:15:52.290695   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I1105 19:15:52.291040   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.291464   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.291479   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.291776   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.291974   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:15:52.291994   73732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:15:52.292015   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292054   73732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.292067   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:15:52.292079   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292355   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.292400   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.295296   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295650   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.295675   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295701   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295797   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.295969   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296102   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296247   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.296272   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.296305   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.296582   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.296714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296848   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296947   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.314049   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I1105 19:15:52.314561   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.315148   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.315168   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.315884   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.316080   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.318146   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.318465   73732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.318478   73732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:15:52.318496   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.321312   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321825   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.321850   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321885   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.322095   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.322238   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.322397   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.453762   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:15:52.483722   73732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493492   73732 node_ready.go:49] node "embed-certs-271881" has status "Ready":"True"
	I1105 19:15:52.493519   73732 node_ready.go:38] duration metric: took 9.757528ms for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493530   73732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:52.508208   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
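node_ready and pod_ready above are minikube's own polling loops; with the context already written to the kubeconfig, roughly the same checks can be expressed with `kubectl wait` (an approximation for manual use, not the code path minikube follows):

    kubectl --context embed-certs-271881 wait --for=condition=Ready node/embed-certs-271881 --timeout=6m
    kubectl --context embed-certs-271881 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m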
	I1105 19:15:52.577925   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.589366   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:15:52.589389   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:15:52.612570   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:15:52.612593   73732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:15:52.645851   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.647686   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:52.647713   73732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:15:52.668865   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:53.246894   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246918   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.246923   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246950   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247230   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247277   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247305   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247323   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247338   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247349   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247331   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247368   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247378   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247710   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247739   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247746   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247779   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247800   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247811   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.269143   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.269165   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.269465   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.269479   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.269483   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.494717   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.494741   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495080   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495100   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495114   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.495123   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495348   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.495394   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495414   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495427   73732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-271881"
	I1105 19:15:53.497126   73732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:15:52.347616   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:54.352434   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:53.498891   73732 addons.go:510] duration metric: took 1.254108253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
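The same addon set can be toggled and inspected through the minikube CLI instead of applying the manifests directly; a minimal equivalent for this profile (flags are standard minikube, the deployment name matches the pod list above):

    minikube -p embed-certs-271881 addons enable storage-provisioner
    minikube -p embed-certs-271881 addons enable metrics-server
    minikube -p embed-certs-271881 addons list
    kubectl --context embed-certs-271881 -n kube-system get deploy metrics-server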
	I1105 19:15:54.518219   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:57.015647   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:56.846198   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:58.847684   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:59.514759   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:01.514818   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:02.515124   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.515148   73732 pod_ready.go:82] duration metric: took 10.006914802s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.515158   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519864   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.519889   73732 pod_ready.go:82] duration metric: took 4.723101ms for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519900   73732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524948   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.524970   73732 pod_ready.go:82] duration metric: took 5.063029ms for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524979   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529710   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.529739   73732 pod_ready.go:82] duration metric: took 4.753888ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529750   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534282   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.534301   73732 pod_ready.go:82] duration metric: took 4.544677ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534309   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912364   73732 pod_ready.go:93] pod "kube-proxy-nfxcj" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.912387   73732 pod_ready.go:82] duration metric: took 378.071939ms for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912397   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311793   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:03.311816   73732 pod_ready.go:82] duration metric: took 399.412502ms for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311822   73732 pod_ready.go:39] duration metric: took 10.818282425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:03.311836   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:16:03.311883   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:16:03.327913   73732 api_server.go:72] duration metric: took 11.083157176s to wait for apiserver process to appear ...
	I1105 19:16:03.327947   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:16:03.327968   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:16:03.334499   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:16:03.335530   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:16:03.335550   73732 api_server.go:131] duration metric: took 7.596072ms to wait for apiserver health ...
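The healthz probe above hits the API server directly at https://192.168.39.58:8443/healthz and treats an HTTP 200 with body "ok" as healthy. Two manual equivalents: curl needs -k (or the cluster CA via --cacert) because the endpoint serves the cluster's certificate, while kubectl reuses the kubeconfig credentials:

    curl -k https://192.168.39.58:8443/healthz
    kubectl --context embed-certs-271881 get --raw /healthz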
	I1105 19:16:03.335558   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:16:03.514782   73732 system_pods.go:59] 9 kube-system pods found
	I1105 19:16:03.514813   73732 system_pods.go:61] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.514820   73732 system_pods.go:61] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.514825   73732 system_pods.go:61] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.514830   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.514835   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.514840   73732 system_pods.go:61] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.514844   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.514854   73732 system_pods.go:61] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.514859   73732 system_pods.go:61] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.514868   73732 system_pods.go:74] duration metric: took 179.304519ms to wait for pod list to return data ...
	I1105 19:16:03.514877   73732 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:16:03.712690   73732 default_sa.go:45] found service account: "default"
	I1105 19:16:03.712719   73732 default_sa.go:55] duration metric: took 197.831177ms for default service account to be created ...
	I1105 19:16:03.712731   73732 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:16:03.916858   73732 system_pods.go:86] 9 kube-system pods found
	I1105 19:16:03.916893   73732 system_pods.go:89] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.916902   73732 system_pods.go:89] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.916908   73732 system_pods.go:89] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.916913   73732 system_pods.go:89] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.916918   73732 system_pods.go:89] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.916921   73732 system_pods.go:89] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.916924   73732 system_pods.go:89] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.916934   73732 system_pods.go:89] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.916941   73732 system_pods.go:89] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.916953   73732 system_pods.go:126] duration metric: took 204.215711ms to wait for k8s-apps to be running ...
	I1105 19:16:03.916963   73732 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:16:03.917019   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:03.931369   73732 system_svc.go:56] duration metric: took 14.397556ms WaitForService to wait for kubelet
	I1105 19:16:03.931407   73732 kubeadm.go:582] duration metric: took 11.686653516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:16:03.931454   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:16:04.111904   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:16:04.111928   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:16:04.111937   73732 node_conditions.go:105] duration metric: took 180.475073ms to run NodePressure ...
	I1105 19:16:04.111947   73732 start.go:241] waiting for startup goroutines ...
	I1105 19:16:04.111953   73732 start.go:246] waiting for cluster config update ...
	I1105 19:16:04.111962   73732 start.go:255] writing updated cluster config ...
	I1105 19:16:04.112197   73732 ssh_runner.go:195] Run: rm -f paused
	I1105 19:16:04.158775   73732 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:16:04.160801   73732 out.go:177] * Done! kubectl is now configured to use "embed-certs-271881" cluster and "default" namespace by default
	I1105 19:16:01.346039   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:03.346369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:05.846866   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:08.346383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:10.346570   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:12.347171   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:14.846335   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.840591   73496 pod_ready.go:82] duration metric: took 4m0.000143963s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	E1105 19:16:17.840620   73496 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:16:17.840649   73496 pod_ready.go:39] duration metric: took 4m11.022533189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:17.840682   73496 kubeadm.go:597] duration metric: took 4m18.432062793s to restartPrimaryControlPlane
	W1105 19:16:17.840732   73496 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:16:17.840755   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:16:21.064069   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:16:21.064607   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:21.064798   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:26.065202   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:26.065410   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:36.065932   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:36.066151   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:43.960239   73496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.119460606s)
	I1105 19:16:43.960324   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:43.986199   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:16:43.999287   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:16:44.013653   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:16:44.013675   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:16:44.013718   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:16:44.026073   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:16:44.026140   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:16:44.038723   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:16:44.050880   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:16:44.050957   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:16:44.061696   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.071739   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:16:44.072301   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.084030   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:16:44.093217   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:16:44.093275   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:16:44.102494   73496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:16:44.267623   73496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:16:52.534375   73496 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:16:52.534458   73496 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:16:52.534569   73496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:16:52.534704   73496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:16:52.534834   73496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:16:52.534930   73496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:16:52.536666   73496 out.go:235]   - Generating certificates and keys ...
	I1105 19:16:52.536759   73496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:16:52.536836   73496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:16:52.536911   73496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:16:52.536963   73496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:16:52.537060   73496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:16:52.537145   73496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:16:52.537232   73496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:16:52.537286   73496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:16:52.537361   73496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:16:52.537455   73496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:16:52.537500   73496 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:16:52.537578   73496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:16:52.537648   73496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:16:52.537725   73496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:16:52.537797   73496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:16:52.537905   73496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:16:52.537988   73496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:16:52.538075   73496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:16:52.538136   73496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:16:52.539588   73496 out.go:235]   - Booting up control plane ...
	I1105 19:16:52.539669   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:16:52.539743   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:16:52.539800   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:16:52.539885   73496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:16:52.539987   73496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:16:52.540057   73496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:16:52.540206   73496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:16:52.540300   73496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:16:52.540367   73496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733469ms
	I1105 19:16:52.540447   73496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:16:52.540528   73496 kubeadm.go:310] [api-check] The API server is healthy after 5.001962829s
	I1105 19:16:52.540651   73496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:16:52.540806   73496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:16:52.540899   73496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:16:52.541094   73496 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-459223 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:16:52.541164   73496 kubeadm.go:310] [bootstrap-token] Using token: f0bzzt.jihwqjda853aoxrb
	I1105 19:16:52.543528   73496 out.go:235]   - Configuring RBAC rules ...
	I1105 19:16:52.543658   73496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:16:52.543777   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:16:52.543942   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:16:52.544072   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:16:52.544222   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:16:52.544327   73496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:16:52.544453   73496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:16:52.544493   73496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:16:52.544536   73496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:16:52.544542   73496 kubeadm.go:310] 
	I1105 19:16:52.544593   73496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:16:52.544599   73496 kubeadm.go:310] 
	I1105 19:16:52.544687   73496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:16:52.544701   73496 kubeadm.go:310] 
	I1105 19:16:52.544739   73496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:16:52.544795   73496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:16:52.544855   73496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:16:52.544881   73496 kubeadm.go:310] 
	I1105 19:16:52.544958   73496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:16:52.544971   73496 kubeadm.go:310] 
	I1105 19:16:52.545039   73496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:16:52.545049   73496 kubeadm.go:310] 
	I1105 19:16:52.545111   73496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:16:52.545193   73496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:16:52.545251   73496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:16:52.545257   73496 kubeadm.go:310] 
	I1105 19:16:52.545324   73496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:16:52.545403   73496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:16:52.545409   73496 kubeadm.go:310] 
	I1105 19:16:52.545480   73496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.545605   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:16:52.545638   73496 kubeadm.go:310] 	--control-plane 
	I1105 19:16:52.545648   73496 kubeadm.go:310] 
	I1105 19:16:52.545779   73496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:16:52.545794   73496 kubeadm.go:310] 
	I1105 19:16:52.545903   73496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.546059   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:16:52.546074   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:16:52.546083   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:16:52.548357   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:16:52.549732   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:16:52.560406   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:16:52.577268   73496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:16:52.577334   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:52.577373   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-459223 minikube.k8s.io/updated_at=2024_11_05T19_16_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=no-preload-459223 minikube.k8s.io/primary=true
	I1105 19:16:52.776299   73496 ops.go:34] apiserver oom_adj: -16
	I1105 19:16:52.776456   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.276618   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.777474   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.276726   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.777004   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.276725   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.777410   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.893941   73496 kubeadm.go:1113] duration metric: took 3.316665512s to wait for elevateKubeSystemPrivileges
	I1105 19:16:55.893984   73496 kubeadm.go:394] duration metric: took 4m56.532038314s to StartCluster
	I1105 19:16:55.894007   73496 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.894104   73496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:16:55.896620   73496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.896934   73496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:16:55.897120   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:16:55.897056   73496 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:16:55.897166   73496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-459223"
	I1105 19:16:55.897176   73496 addons.go:69] Setting default-storageclass=true in profile "no-preload-459223"
	I1105 19:16:55.897186   73496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-459223"
	I1105 19:16:55.897193   73496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-459223"
	I1105 19:16:55.897211   73496 addons.go:69] Setting metrics-server=true in profile "no-preload-459223"
	I1105 19:16:55.897231   73496 addons.go:234] Setting addon metrics-server=true in "no-preload-459223"
	W1105 19:16:55.897243   73496 addons.go:243] addon metrics-server should already be in state true
	I1105 19:16:55.897271   73496 host.go:66] Checking if "no-preload-459223" exists ...
	W1105 19:16:55.897195   73496 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:16:55.897323   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.897599   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897642   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897705   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897754   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897711   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897811   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.898341   73496 out.go:177] * Verifying Kubernetes components...
	I1105 19:16:55.899778   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:16:55.914218   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1105 19:16:55.914305   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1105 19:16:55.914726   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.914837   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.915283   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915305   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915391   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915418   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915642   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915757   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915804   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.916323   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.916367   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.916858   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1105 19:16:55.917296   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.917805   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.917832   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.918156   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.918678   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.918720   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.919527   73496 addons.go:234] Setting addon default-storageclass=true in "no-preload-459223"
	W1105 19:16:55.919549   73496 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:16:55.919576   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.919954   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.919996   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.932547   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I1105 19:16:55.933026   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.933588   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.933601   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.933918   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.934153   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.936094   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.937415   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I1105 19:16:55.937800   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.937812   73496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:16:55.938312   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.938324   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.938420   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I1105 19:16:55.938661   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.938816   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.938867   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:16:55.938894   73496 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:16:55.938918   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.939014   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.939350   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.939362   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.939855   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.940281   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.940310   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.940959   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.942661   73496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:16:55.942797   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943216   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.943392   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943422   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.943588   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.943842   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.944078   73496 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:55.944083   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.944096   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:16:55.944114   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.947574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.947767   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.947789   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.948125   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.948249   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.948343   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.948424   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.987691   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I1105 19:16:55.988131   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.988714   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.988739   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.989127   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.989325   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.991207   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.991453   73496 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:55.991472   73496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:16:55.991492   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.994362   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994800   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.994846   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994938   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.995145   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.995315   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.996088   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:56.109142   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:16:56.126382   73496 node_ready.go:35] waiting up to 6m0s for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138050   73496 node_ready.go:49] node "no-preload-459223" has status "Ready":"True"
	I1105 19:16:56.138076   73496 node_ready.go:38] duration metric: took 11.661265ms for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138087   73496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:56.143325   73496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:56.230205   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:16:56.230228   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:16:56.232603   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:56.259360   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:16:56.259388   73496 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:16:56.268694   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:56.321334   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:56.321364   73496 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:16:56.387409   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:57.010417   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010441   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010496   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010522   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010748   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.010795   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010804   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010812   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010818   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010817   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010830   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010838   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010843   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.011143   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011147   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011205   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011221   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.011209   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011298   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074127   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.074148   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.074476   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.074543   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074508   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.135875   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.135898   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136259   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136280   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136278   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136291   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.136308   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136703   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136747   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136757   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136767   73496 addons.go:475] Verifying addon metrics-server=true in "no-preload-459223"
	I1105 19:16:57.138699   73496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:16:56.066834   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:56.067140   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:57.140755   73496 addons.go:510] duration metric: took 1.243699533s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1105 19:16:58.154376   73496 pod_ready.go:103] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:17:00.149838   73496 pod_ready.go:93] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:00.149864   73496 pod_ready.go:82] duration metric: took 4.006514005s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:00.149876   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156460   73496 pod_ready.go:93] pod "kube-apiserver-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.156486   73496 pod_ready.go:82] duration metric: took 1.006602068s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156499   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160598   73496 pod_ready.go:93] pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.160618   73496 pod_ready.go:82] duration metric: took 4.110322ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160631   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164461   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.164482   73496 pod_ready.go:82] duration metric: took 3.842329ms for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164492   73496 pod_ready.go:39] duration metric: took 5.026393011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:17:01.164509   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:17:01.164566   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:17:01.183307   73496 api_server.go:72] duration metric: took 5.286331754s to wait for apiserver process to appear ...
	I1105 19:17:01.183338   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:17:01.183357   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:17:01.189083   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:17:01.190439   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:17:01.190469   73496 api_server.go:131] duration metric: took 7.123058ms to wait for apiserver health ...
	I1105 19:17:01.190479   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:17:01.198820   73496 system_pods.go:59] 9 kube-system pods found
	I1105 19:17:01.198854   73496 system_pods.go:61] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198862   73496 system_pods.go:61] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198869   73496 system_pods.go:61] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.198873   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.198879   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.198883   73496 system_pods.go:61] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.198887   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.198893   73496 system_pods.go:61] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.198896   73496 system_pods.go:61] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.198903   73496 system_pods.go:74] duration metric: took 8.418414ms to wait for pod list to return data ...
	I1105 19:17:01.198913   73496 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:17:01.202229   73496 default_sa.go:45] found service account: "default"
	I1105 19:17:01.202251   73496 default_sa.go:55] duration metric: took 3.332652ms for default service account to be created ...
	I1105 19:17:01.202260   73496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:17:01.208774   73496 system_pods.go:86] 9 kube-system pods found
	I1105 19:17:01.208803   73496 system_pods.go:89] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208811   73496 system_pods.go:89] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208817   73496 system_pods.go:89] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.208821   73496 system_pods.go:89] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.208825   73496 system_pods.go:89] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.208828   73496 system_pods.go:89] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.208833   73496 system_pods.go:89] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.208838   73496 system_pods.go:89] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.208842   73496 system_pods.go:89] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.208848   73496 system_pods.go:126] duration metric: took 6.584071ms to wait for k8s-apps to be running ...
	I1105 19:17:01.208856   73496 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:17:01.208898   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:01.225005   73496 system_svc.go:56] duration metric: took 16.138051ms WaitForService to wait for kubelet
	I1105 19:17:01.225038   73496 kubeadm.go:582] duration metric: took 5.328067688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:17:01.225062   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:17:01.347771   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:17:01.347799   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:17:01.347813   73496 node_conditions.go:105] duration metric: took 122.746343ms to run NodePressure ...
	I1105 19:17:01.347826   73496 start.go:241] waiting for startup goroutines ...
	I1105 19:17:01.347834   73496 start.go:246] waiting for cluster config update ...
	I1105 19:17:01.347846   73496 start.go:255] writing updated cluster config ...
	I1105 19:17:01.348126   73496 ssh_runner.go:195] Run: rm -f paused
	I1105 19:17:01.396396   73496 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:17:01.398528   73496 out.go:177] * Done! kubectl is now configured to use "no-preload-459223" cluster and "default" namespace by default
	I1105 19:17:36.069129   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:17:36.069396   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:17:36.069426   74485 kubeadm.go:310] 
	I1105 19:17:36.069489   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:17:36.069572   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:17:36.069591   74485 kubeadm.go:310] 
	I1105 19:17:36.069638   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:17:36.069699   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:17:36.069843   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:17:36.069852   74485 kubeadm.go:310] 
	I1105 19:17:36.069967   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:17:36.070017   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:17:36.070067   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:17:36.070074   74485 kubeadm.go:310] 
	I1105 19:17:36.070216   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:17:36.070328   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:17:36.070345   74485 kubeadm.go:310] 
	I1105 19:17:36.070486   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:17:36.070622   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:17:36.070690   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:17:36.070758   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:17:36.070767   74485 kubeadm.go:310] 
	I1105 19:17:36.071471   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:17:36.071558   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:17:36.071652   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1105 19:17:36.071791   74485 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1105 19:17:36.071838   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:17:36.527864   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:36.543211   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:17:36.552656   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:17:36.552676   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:17:36.552734   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:17:36.562296   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:17:36.562360   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:17:36.571759   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:17:36.580534   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:17:36.580586   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:17:36.590320   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.599165   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:17:36.599235   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.608340   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:17:36.616935   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:17:36.616986   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:17:36.625948   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:17:36.843267   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:19:32.770686   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:19:32.770828   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 19:19:32.772504   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:19:32.772564   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:19:32.772656   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:19:32.772784   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:19:32.772893   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:19:32.772971   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:19:32.774648   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:19:32.774726   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:19:32.774804   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:19:32.774902   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:19:32.775012   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:19:32.775144   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:19:32.775223   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:19:32.775307   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:19:32.775397   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:19:32.775487   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:19:32.775597   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:19:32.775651   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:19:32.775728   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:19:32.775796   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:19:32.775864   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:19:32.775961   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:19:32.776041   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:19:32.776175   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:19:32.776281   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:19:32.776330   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:19:32.776417   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:19:32.777837   74485 out.go:235]   - Booting up control plane ...
	I1105 19:19:32.777940   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:19:32.778032   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:19:32.778134   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:19:32.778248   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:19:32.778489   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:19:32.778563   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:19:32.778652   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.778960   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779080   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779302   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779399   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779663   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779766   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779990   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780051   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.780241   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780260   74485 kubeadm.go:310] 
	I1105 19:19:32.780325   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:19:32.780381   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:19:32.780391   74485 kubeadm.go:310] 
	I1105 19:19:32.780438   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:19:32.780486   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:19:32.780627   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:19:32.780639   74485 kubeadm.go:310] 
	I1105 19:19:32.780748   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:19:32.780790   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:19:32.780819   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:19:32.780825   74485 kubeadm.go:310] 
	I1105 19:19:32.780961   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:19:32.781048   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:19:32.781055   74485 kubeadm.go:310] 
	I1105 19:19:32.781144   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:19:32.781225   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:19:32.781293   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:19:32.781394   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:19:32.781475   74485 kubeadm.go:394] duration metric: took 8m1.792270232s to StartCluster
	I1105 19:19:32.781485   74485 kubeadm.go:310] 
	I1105 19:19:32.781522   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:19:32.781589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:19:32.825435   74485 cri.go:89] found id: ""
	I1105 19:19:32.825465   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.825475   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:19:32.825482   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:19:32.825543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:19:32.859245   74485 cri.go:89] found id: ""
	I1105 19:19:32.859275   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.859286   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:19:32.859293   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:19:32.859355   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:19:32.890801   74485 cri.go:89] found id: ""
	I1105 19:19:32.890833   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.890844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:19:32.890851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:19:32.890919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:19:32.925244   74485 cri.go:89] found id: ""
	I1105 19:19:32.925273   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.925280   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:19:32.925287   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:19:32.925352   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:19:32.959091   74485 cri.go:89] found id: ""
	I1105 19:19:32.959118   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.959129   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:19:32.959137   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:19:32.959191   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:19:32.990230   74485 cri.go:89] found id: ""
	I1105 19:19:32.990264   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.990276   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:19:32.990284   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:19:32.990343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:19:33.027461   74485 cri.go:89] found id: ""
	I1105 19:19:33.027494   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.027505   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:19:33.027512   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:19:33.027574   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:19:33.070819   74485 cri.go:89] found id: ""
	I1105 19:19:33.070847   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.070858   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:19:33.070869   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:19:33.070883   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:19:33.122580   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:19:33.122615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:19:33.136015   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:19:33.136043   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:19:33.213727   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:19:33.213750   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:19:33.213762   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:19:33.324287   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:19:33.324333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1105 19:19:33.384732   74485 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 19:19:33.384785   74485 out.go:270] * 
	W1105 19:19:33.384844   74485 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.384857   74485 out.go:270] * 
	W1105 19:19:33.385632   74485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:19:33.388860   74485 out.go:201] 
	W1105 19:19:33.390328   74485 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.390366   74485 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 19:19:33.390393   74485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 19:19:33.391785   74485 out.go:201] 
	
	
	==> CRI-O <==
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.545668333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834918545647439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e280e97-e2ed-4b52-9a92-5910b92e8132 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.546095011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d07e51e-268a-4d21-a9c9-eb13e7210ff8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.546187919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d07e51e-268a-4d21-a9c9-eb13e7210ff8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.546221597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4d07e51e-268a-4d21-a9c9-eb13e7210ff8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.579439290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ad3c82f-1b93-47af-a01c-dbebcbdccd9f name=/runtime.v1.RuntimeService/Version
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.579535766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ad3c82f-1b93-47af-a01c-dbebcbdccd9f name=/runtime.v1.RuntimeService/Version
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.580548309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6628276a-ed60-49ea-800b-5e1a26e59ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.580990225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834918580960309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6628276a-ed60-49ea-800b-5e1a26e59ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.581653028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41a7747a-8a5b-489b-a825-68b38214b29b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.581701633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41a7747a-8a5b-489b-a825-68b38214b29b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.581737757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=41a7747a-8a5b-489b-a825-68b38214b29b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.610776158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d1cd8f8-133a-451a-8c6c-283be62d3e48 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.610846290Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d1cd8f8-133a-451a-8c6c-283be62d3e48 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.611858906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92c3fc83-73eb-4374-a440-578f319f13d7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.612313315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834918612288467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92c3fc83-73eb-4374-a440-578f319f13d7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.612830024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6496f1d7-8f6a-4f5f-a6ee-b78c0c2e01ad name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.612877354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6496f1d7-8f6a-4f5f-a6ee-b78c0c2e01ad name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.612905904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6496f1d7-8f6a-4f5f-a6ee-b78c0c2e01ad name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.647366056Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b89c04a-31dc-4306-9586-2b5e0563bfae name=/runtime.v1.RuntimeService/Version
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.647460875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b89c04a-31dc-4306-9586-2b5e0563bfae name=/runtime.v1.RuntimeService/Version
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.649022538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ec5d1b3-c668-4778-9efb-ebf5ca1ea950 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.649649298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730834918649607018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ec5d1b3-c668-4778-9efb-ebf5ca1ea950 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.650280451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfae0450-4d88-435e-8c17-c295f8c369a5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.650355558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfae0450-4d88-435e-8c17-c295f8c369a5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:28:38 old-k8s-version-567666 crio[622]: time="2024-11-05 19:28:38.650407805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cfae0450-4d88-435e-8c17-c295f8c369a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 5 19:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055631] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039673] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.010642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.961684] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543338] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.991220] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.059812] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.048972] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.214500] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.145320] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.257311] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +6.641170] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[  +0.060122] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.800603] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[ +13.119531] kauditd_printk_skb: 46 callbacks suppressed
	[Nov 5 19:15] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Nov 5 19:17] systemd-fstab-generator[5393]: Ignoring "noauto" option for root device
	[  +0.071837] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:28:38 up 17 min,  0 users,  load average: 0.04, 0.05, 0.01
	Linux old-k8s-version-567666 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006286f0)
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a5def0, 0x4f0ac20, 0xc000205f40, 0x1, 0xc00009e0c0)
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000264380, 0xc00009e0c0)
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008ffca0, 0xc0008dcae0)
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Nov 05 19:28:33 old-k8s-version-567666 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 05 19:28:33 old-k8s-version-567666 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 05 19:28:33 old-k8s-version-567666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Nov 05 19:28:33 old-k8s-version-567666 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 05 19:28:33 old-k8s-version-567666 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6560]: I1105 19:28:33.875401    6560 server.go:416] Version: v1.20.0
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6560]: I1105 19:28:33.875745    6560 server.go:837] Client rotation is on, will bootstrap in background
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6560]: I1105 19:28:33.877607    6560 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6560]: I1105 19:28:33.878870    6560 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Nov 05 19:28:33 old-k8s-version-567666 kubelet[6560]: W1105 19:28:33.879013    6560 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
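The kubelet log at the end of the dump above stops at "Cannot detect current cgroup on cgroup v2" with the service restart counter at 114. A quick way to confirm the node's cgroup mode and pull the most recent kubelet failure is sketched below; these commands are not part of the test run (the systemctl/journalctl calls mirror the kubeadm guidance printed above, while the stat check is an added assumption about the guest image):

	stat -fc %T /sys/fs/cgroup/              # "cgroup2fs" means cgroup v2, "tmpfs" means v1
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet | tail -n 50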
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 2 (224.395674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-567666" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)
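If reproducing this failure locally, the suggestion printed in the log above can be applied directly. A minimal sketch, assuming the profile name, driver, runtime and Kubernetes version shown in the log, and the stock minikube CLI rather than the test binary:

	minikube start -p old-k8s-version-567666 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still fails to come up, collect logs for the linked GitHub issue:
	minikube logs --file=logs.txt -p old-k8s-version-567666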

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (489.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-11-05 19:32:51.879201814 +0000 UTC m=+6705.551926505
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-608095 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-608095 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.445µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-608095 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
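Before the post-mortem below, the same check the test attempts can be run by hand against the recorded context; an illustrative sketch only, using the context, namespace and deployment names that appear in the test commands above:

	kubectl --context default-k8s-diff-port-608095 -n kubernetes-dashboard get deploy,pods -o wide
	kubectl --context default-k8s-diff-port-608095 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'   # expected to contain registry.k8s.io/echoserver:1.4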
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-608095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-608095 logs -n 25: (1.087039599s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:30 UTC | 05 Nov 24 19:30 UTC |
	| start   | -p newest-cni-886087 --memory=2200 --alsologtostderr   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:30 UTC | 05 Nov 24 19:31 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:31 UTC |
	| addons  | enable metrics-server -p newest-cni-886087             | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-886087                                   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-886087                  | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-886087 --memory=2200 --alsologtostderr   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:32 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:32 UTC | 05 Nov 24 19:32 UTC |
	| image   | newest-cni-886087 image list                           | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:32 UTC | 05 Nov 24 19:32 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-886087                                   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:32 UTC | 05 Nov 24 19:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-886087                                   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:32 UTC | 05 Nov 24 19:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-886087                                   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:32 UTC | 05 Nov 24 19:32 UTC |
	| delete  | -p newest-cni-886087                                   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:32 UTC | 05 Nov 24 19:32 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:31:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:31:38.149094   81659 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:31:38.149230   81659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:31:38.149242   81659 out.go:358] Setting ErrFile to fd 2...
	I1105 19:31:38.149249   81659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:31:38.149499   81659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:31:38.150088   81659 out.go:352] Setting JSON to false
	I1105 19:31:38.151075   81659 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8040,"bootTime":1730827058,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:31:38.151138   81659 start.go:139] virtualization: kvm guest
	I1105 19:31:38.153466   81659 out.go:177] * [newest-cni-886087] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:31:38.154910   81659 notify.go:220] Checking for updates...
	I1105 19:31:38.154982   81659 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:31:38.156395   81659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:31:38.157631   81659 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:31:38.158830   81659 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:31:38.160143   81659 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:31:38.161615   81659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:31:38.163401   81659 config.go:182] Loaded profile config "newest-cni-886087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:31:38.163777   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:31:38.163814   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:31:38.178806   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I1105 19:31:38.179249   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:31:38.179856   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:31:38.179894   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:31:38.180202   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:31:38.180371   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:38.180613   81659 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:31:38.180883   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:31:38.180920   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:31:38.195462   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1105 19:31:38.195897   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:31:38.196442   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:31:38.196467   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:31:38.196759   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:31:38.196935   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:38.233122   81659 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:31:38.234530   81659 start.go:297] selected driver: kvm2
	I1105 19:31:38.234548   81659 start.go:901] validating driver "kvm2" against &{Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:31:38.234645   81659 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:31:38.235342   81659 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:31:38.235425   81659 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:31:38.250731   81659 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:31:38.251188   81659 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1105 19:31:38.251220   81659 cni.go:84] Creating CNI manager for ""
	I1105 19:31:38.251273   81659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:31:38.251320   81659 start.go:340] cluster config:
	{Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:31:38.251454   81659 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:31:38.253274   81659 out.go:177] * Starting "newest-cni-886087" primary control-plane node in "newest-cni-886087" cluster
	I1105 19:31:38.254589   81659 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:31:38.254623   81659 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 19:31:38.254636   81659 cache.go:56] Caching tarball of preloaded images
	I1105 19:31:38.254743   81659 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:31:38.254758   81659 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 19:31:38.254870   81659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/config.json ...
	I1105 19:31:38.255125   81659 start.go:360] acquireMachinesLock for newest-cni-886087: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:31:38.255175   81659 start.go:364] duration metric: took 28.402µs to acquireMachinesLock for "newest-cni-886087"
	I1105 19:31:38.255193   81659 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:31:38.255202   81659 fix.go:54] fixHost starting: 
	I1105 19:31:38.255507   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:31:38.255544   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:31:38.270186   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I1105 19:31:38.270563   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:31:38.271031   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:31:38.271051   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:31:38.271423   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:31:38.271623   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:38.271764   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:31:38.273175   81659 fix.go:112] recreateIfNeeded on newest-cni-886087: state=Stopped err=<nil>
	I1105 19:31:38.273217   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	W1105 19:31:38.273373   81659 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:31:38.275760   81659 out.go:177] * Restarting existing kvm2 VM for "newest-cni-886087" ...
	I1105 19:31:38.276821   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Start
	I1105 19:31:38.276982   81659 main.go:141] libmachine: (newest-cni-886087) Ensuring networks are active...
	I1105 19:31:38.277848   81659 main.go:141] libmachine: (newest-cni-886087) Ensuring network default is active
	I1105 19:31:38.278134   81659 main.go:141] libmachine: (newest-cni-886087) Ensuring network mk-newest-cni-886087 is active
	I1105 19:31:38.278429   81659 main.go:141] libmachine: (newest-cni-886087) Getting domain xml...
	I1105 19:31:38.279078   81659 main.go:141] libmachine: (newest-cni-886087) Creating domain...
	I1105 19:31:39.502276   81659 main.go:141] libmachine: (newest-cni-886087) Waiting to get IP...
	I1105 19:31:39.503090   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:39.503443   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:39.503547   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:39.503428   81709 retry.go:31] will retry after 250.164469ms: waiting for machine to come up
	I1105 19:31:39.754791   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:39.755478   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:39.755509   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:39.755442   81709 retry.go:31] will retry after 375.555481ms: waiting for machine to come up
	I1105 19:31:40.132932   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:40.133416   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:40.133450   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:40.133341   81709 retry.go:31] will retry after 400.386653ms: waiting for machine to come up
	I1105 19:31:40.535017   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:40.535517   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:40.535544   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:40.535458   81709 retry.go:31] will retry after 390.748801ms: waiting for machine to come up
	I1105 19:31:40.928002   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:40.928522   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:40.928553   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:40.928472   81709 retry.go:31] will retry after 587.673187ms: waiting for machine to come up
	I1105 19:31:41.518371   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:41.519006   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:41.519038   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:41.518925   81709 retry.go:31] will retry after 675.665704ms: waiting for machine to come up
	I1105 19:31:42.195867   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:42.196379   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:42.196403   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:42.196340   81709 retry.go:31] will retry after 1.084942101s: waiting for machine to come up
	I1105 19:31:43.283142   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:43.283596   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:43.283627   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:43.283550   81709 retry.go:31] will retry after 1.257040395s: waiting for machine to come up
	I1105 19:31:44.541752   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:44.542140   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:44.542164   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:44.542114   81709 retry.go:31] will retry after 1.313530392s: waiting for machine to come up
	I1105 19:31:45.857551   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:45.857975   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:45.857996   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:45.857938   81709 retry.go:31] will retry after 1.973444875s: waiting for machine to come up
	I1105 19:31:47.833857   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:47.834322   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:47.834352   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:47.834258   81709 retry.go:31] will retry after 2.471561461s: waiting for machine to come up
	I1105 19:31:50.308495   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:50.308947   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:50.308965   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:50.308904   81709 retry.go:31] will retry after 2.274664056s: waiting for machine to come up
	I1105 19:31:52.585705   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:52.586075   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:52.586103   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:52.586020   81709 retry.go:31] will retry after 2.999577394s: waiting for machine to come up
	I1105 19:31:55.588143   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.588593   81659 main.go:141] libmachine: (newest-cni-886087) Found IP for machine: 192.168.61.217
	I1105 19:31:55.588617   81659 main.go:141] libmachine: (newest-cni-886087) Reserving static IP address...
	I1105 19:31:55.588631   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has current primary IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.588972   81659 main.go:141] libmachine: (newest-cni-886087) Reserved static IP address: 192.168.61.217
	I1105 19:31:55.588997   81659 main.go:141] libmachine: (newest-cni-886087) Waiting for SSH to be available...
	I1105 19:31:55.589015   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "newest-cni-886087", mac: "52:54:00:c0:46:5f", ip: "192.168.61.217"} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.589052   81659 main.go:141] libmachine: (newest-cni-886087) DBG | skip adding static IP to network mk-newest-cni-886087 - found existing host DHCP lease matching {name: "newest-cni-886087", mac: "52:54:00:c0:46:5f", ip: "192.168.61.217"}
	I1105 19:31:55.589068   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Getting to WaitForSSH function...
	I1105 19:31:55.590945   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.591268   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.591293   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.591469   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Using SSH client type: external
	I1105 19:31:55.591498   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa (-rw-------)
	I1105 19:31:55.591530   81659 main.go:141] libmachine: (newest-cni-886087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:31:55.591547   81659 main.go:141] libmachine: (newest-cni-886087) DBG | About to run SSH command:
	I1105 19:31:55.591558   81659 main.go:141] libmachine: (newest-cni-886087) DBG | exit 0
	I1105 19:31:55.714962   81659 main.go:141] libmachine: (newest-cni-886087) DBG | SSH cmd err, output: <nil>: 
	I1105 19:31:55.715453   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetConfigRaw
	I1105 19:31:55.716059   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:55.718740   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.719123   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.719162   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.719398   81659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/config.json ...
	I1105 19:31:55.719615   81659 machine.go:93] provisionDockerMachine start ...
	I1105 19:31:55.719634   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:55.719845   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:55.722185   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.722574   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.722603   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.722789   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:55.722928   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.723121   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.723266   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:55.723443   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:55.723629   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:55.723643   81659 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:31:55.831124   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:31:55.831161   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:55.831415   81659 buildroot.go:166] provisioning hostname "newest-cni-886087"
	I1105 19:31:55.831447   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:55.831613   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:55.834426   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.834811   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.834840   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.835048   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:55.835206   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.835334   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.835443   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:55.835568   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:55.835761   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:55.835776   81659 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-886087 && echo "newest-cni-886087" | sudo tee /etc/hostname
	I1105 19:31:55.957442   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-886087
	
	I1105 19:31:55.957473   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:55.960162   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.960489   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.960522   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.960703   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:55.960897   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.961071   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.961214   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:55.961354   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:55.961558   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:55.961574   81659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-886087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-886087/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-886087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:31:56.083790   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:31:56.083821   81659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:31:56.083878   81659 buildroot.go:174] setting up certificates
	I1105 19:31:56.083893   81659 provision.go:84] configureAuth start
	I1105 19:31:56.083910   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:56.084187   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:56.087133   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.087571   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.087600   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.087781   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.090575   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.090997   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.091029   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.091307   81659 provision.go:143] copyHostCerts
	I1105 19:31:56.091353   81659 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:31:56.091369   81659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:31:56.091450   81659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:31:56.091622   81659 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:31:56.091631   81659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:31:56.091670   81659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:31:56.091775   81659 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:31:56.091787   81659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:31:56.091823   81659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:31:56.091920   81659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.newest-cni-886087 san=[127.0.0.1 192.168.61.217 localhost minikube newest-cni-886087]
	I1105 19:31:56.189913   81659 provision.go:177] copyRemoteCerts
	I1105 19:31:56.189971   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:31:56.189997   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.192299   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.192633   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.192661   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.192808   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.192972   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.193099   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.193249   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:56.276612   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:31:56.303079   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:31:56.328222   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:31:56.351014   81659 provision.go:87] duration metric: took 267.105822ms to configureAuth
	I1105 19:31:56.351043   81659 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:31:56.351254   81659 config.go:182] Loaded profile config "newest-cni-886087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:31:56.351332   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.353926   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.354250   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.354302   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.354487   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.354681   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.354833   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.355017   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.355203   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:56.355384   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:56.355404   81659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:31:56.595986   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:31:56.596012   81659 machine.go:96] duration metric: took 876.384014ms to provisionDockerMachine
	I1105 19:31:56.596026   81659 start.go:293] postStartSetup for "newest-cni-886087" (driver="kvm2")
	I1105 19:31:56.596039   81659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:31:56.596076   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.596362   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:31:56.596390   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.599199   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.599611   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.599652   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.599760   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.599976   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.600145   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.600318   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:56.681700   81659 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:31:56.685772   81659 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:31:56.685796   81659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:31:56.685868   81659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:31:56.685963   81659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:31:56.686093   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:31:56.695473   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:31:56.718655   81659 start.go:296] duration metric: took 122.585679ms for postStartSetup
	I1105 19:31:56.718703   81659 fix.go:56] duration metric: took 18.463500183s for fixHost
	I1105 19:31:56.718728   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.721350   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.721682   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.721712   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.721837   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.722043   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.722236   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.722381   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.722539   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:56.722733   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:56.722745   81659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:31:56.831466   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730835116.808398865
	
	I1105 19:31:56.831490   81659 fix.go:216] guest clock: 1730835116.808398865
	I1105 19:31:56.831500   81659 fix.go:229] Guest: 2024-11-05 19:31:56.808398865 +0000 UTC Remote: 2024-11-05 19:31:56.718708499 +0000 UTC m=+18.609118399 (delta=89.690366ms)
	I1105 19:31:56.831525   81659 fix.go:200] guest clock delta is within tolerance: 89.690366ms
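
The fix.go lines above read the guest clock with `date +%s.%N` over SSH and compare it against the host clock, accepting the ~90ms drift as within tolerance. The sketch below reproduces that comparison in plain Go, assuming the guest output has already been captured as a string; the one-second tolerance is illustrative, not the value minikube uses.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1730835116.808398865") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// Illustrative tolerance; the real check decides whether the guest clock needs a resync.
		if delta < time.Second {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
		}
	}
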
	I1105 19:31:56.831540   81659 start.go:83] releasing machines lock for "newest-cni-886087", held for 18.576353925s
	I1105 19:31:56.831566   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.831859   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:56.834811   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.835197   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.835224   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.835392   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.835835   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.836031   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.836148   81659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:31:56.836194   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.836252   81659 ssh_runner.go:195] Run: cat /version.json
	I1105 19:31:56.836276   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.839031   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.839059   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.839413   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.839443   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.839464   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.839479   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.839594   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.839750   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.839752   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.839892   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.839929   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.840037   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.840033   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:56.840183   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:56.947428   81659 ssh_runner.go:195] Run: systemctl --version
	I1105 19:31:56.953300   81659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:31:57.092022   81659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:31:57.098268   81659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:31:57.098354   81659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:31:57.113198   81659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:31:57.113230   81659 start.go:495] detecting cgroup driver to use...
	I1105 19:31:57.113289   81659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:31:57.129183   81659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:31:57.143140   81659 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:31:57.143222   81659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:31:57.156946   81659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:31:57.170736   81659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:31:57.284528   81659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:31:57.405685   81659 docker.go:233] disabling docker service ...
	I1105 19:31:57.405763   81659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:31:57.419599   81659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:31:57.431787   81659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:31:57.555457   81659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:31:57.665868   81659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:31:57.679880   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:31:57.697840   81659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:31:57.697921   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.708065   81659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:31:57.708125   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.717851   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.728177   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.737821   81659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:31:57.748692   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.758609   81659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.774502   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
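
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to "cgroupfs", forces conmon_cgroup to "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. The same kind of in-place edit can be written directly in Go; the sketch below covers only the cgroup_manager substitution and runs against a local copy of the file, not the remote sed pipeline used in this log.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setCgroupManager rewrites any existing cgroup_manager line in a CRI-O
	// drop-in config so the runtime uses the given cgroup driver.
	func setCgroupManager(path, driver string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", driver)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		// Path as used in the log; in the test this edit happens on the guest over SSH.
		if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
			fmt.Println("edit failed:", err)
		}
	}
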
	I1105 19:31:57.784623   81659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:31:57.793246   81659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:31:57.793310   81659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:31:57.806454   81659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:31:57.815058   81659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:31:57.934064   81659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:31:58.022623   81659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:31:58.022717   81659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:31:58.027195   81659 start.go:563] Will wait 60s for crictl version
	I1105 19:31:58.027257   81659 ssh_runner.go:195] Run: which crictl
	I1105 19:31:58.030808   81659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:31:58.069408   81659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:31:58.069499   81659 ssh_runner.go:195] Run: crio --version
	I1105 19:31:58.096795   81659 ssh_runner.go:195] Run: crio --version
	I1105 19:31:58.126804   81659 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:31:58.127932   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:58.130556   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:58.130925   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:58.130948   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:58.131156   81659 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:31:58.134860   81659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
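
The grep/bash one-liner above keeps /etc/hosts idempotent for host.minikube.internal: any existing line for that name is dropped, the gateway IP is appended, and the result is copied back over the original file. A Go rendering of the same update is sketched below; the function name and the direct rewrite (rather than the temp-file-plus-sudo-cp dance in the log) are illustrative.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing line that ends in "\thostname" and
	// appends "ip\thostname", so repeated runs leave exactly one entry.
	func ensureHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+hostname) {
				continue // drop the stale entry
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Println("hosts update failed:", err)
		}
	}
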
	I1105 19:31:58.148115   81659 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1105 19:31:58.149491   81659 kubeadm.go:883] updating cluster {Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:31:58.149616   81659 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:31:58.149693   81659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:31:58.185973   81659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:31:58.186068   81659 ssh_runner.go:195] Run: which lz4
	I1105 19:31:58.190114   81659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:31:58.194132   81659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:31:58.194167   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:31:59.422646   81659 crio.go:462] duration metric: took 1.232559867s to copy over tarball
	I1105 19:31:59.422741   81659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:32:01.473465   81659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.050691247s)
	I1105 19:32:01.473515   81659 crio.go:469] duration metric: took 2.050829011s to extract the tarball
	I1105 19:32:01.473526   81659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:32:01.510917   81659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:32:01.562006   81659 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:32:01.562030   81659 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:32:01.562037   81659 kubeadm.go:934] updating node { 192.168.61.217 8443 v1.31.2 crio true true} ...
	I1105 19:32:01.562131   81659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-886087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:32:01.562200   81659 ssh_runner.go:195] Run: crio config
	I1105 19:32:01.609362   81659 cni.go:84] Creating CNI manager for ""
	I1105 19:32:01.609389   81659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:32:01.609403   81659 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1105 19:32:01.609433   81659 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.217 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-886087 NodeName:newest-cni-886087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.61.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:32:01.609614   81659 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-886087"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:32:01.609688   81659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:32:01.620778   81659 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:32:01.620844   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:32:01.631260   81659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1105 19:32:01.648389   81659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:32:01.665644   81659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1105 19:32:01.683737   81659 ssh_runner.go:195] Run: grep 192.168.61.217	control-plane.minikube.internal$ /etc/hosts
	I1105 19:32:01.687493   81659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:32:01.699954   81659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:32:01.821990   81659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:32:01.838401   81659 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087 for IP: 192.168.61.217
	I1105 19:32:01.838423   81659 certs.go:194] generating shared ca certs ...
	I1105 19:32:01.838438   81659 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:32:01.838590   81659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:32:01.838636   81659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:32:01.838646   81659 certs.go:256] generating profile certs ...
	I1105 19:32:01.838748   81659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/client.key
	I1105 19:32:01.838824   81659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key.141acc84
	I1105 19:32:01.838884   81659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.key
	I1105 19:32:01.839118   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:32:01.839201   81659 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:32:01.839215   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:32:01.839265   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:32:01.839305   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:32:01.839345   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:32:01.839407   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:32:01.840276   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:32:01.875771   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:32:01.918265   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:32:01.953634   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:32:01.983013   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:32:02.017669   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:32:02.040380   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:32:02.064023   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:32:02.088155   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:32:02.111233   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:32:02.133841   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:32:02.156611   81659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:32:02.174743   81659 ssh_runner.go:195] Run: openssl version
	I1105 19:32:02.180453   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:32:02.190928   81659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:32:02.195412   81659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:32:02.195464   81659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:32:02.201478   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:32:02.212090   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:32:02.222067   81659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:32:02.226513   81659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:32:02.226562   81659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:32:02.232169   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:32:02.244620   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:32:02.254640   81659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:32:02.259104   81659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:32:02.259161   81659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:32:02.264644   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:32:02.274489   81659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:32:02.278709   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:32:02.284325   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:32:02.289774   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:32:02.295667   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:32:02.301401   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:32:02.307157   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
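
The `openssl x509 -checkend 86400` calls above confirm that each control-plane certificate stays valid for at least another 24 hours before the existing cluster state is reused. The equivalent check in Go, for a single PEM file, is sketched below; the path is taken from the log and the 24-hour window mirrors the -checkend argument.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the first certificate in the PEM file is still
	// valid for at least the given duration (the -checkend equivalent).
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("valid for the next 24h:", ok)
	}
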
	I1105 19:32:02.313025   81659 kubeadm.go:392] StartCluster: {Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0
s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:32:02.313140   81659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:32:02.313188   81659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:32:02.351517   81659 cri.go:89] found id: ""
	I1105 19:32:02.351604   81659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:32:02.362427   81659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:32:02.362446   81659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:32:02.362484   81659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:32:02.371723   81659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:32:02.372612   81659 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-886087" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:32:02.373245   81659 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-886087" cluster setting kubeconfig missing "newest-cni-886087" context setting]
	I1105 19:32:02.374107   81659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:32:02.375681   81659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:32:02.384794   81659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.217
	I1105 19:32:02.384825   81659 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:32:02.384837   81659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:32:02.384891   81659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:32:02.422959   81659 cri.go:89] found id: ""
	I1105 19:32:02.423041   81659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:32:02.438194   81659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:32:02.447370   81659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:32:02.447389   81659 kubeadm.go:157] found existing configuration files:
	
	I1105 19:32:02.447429   81659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:32:02.456079   81659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:32:02.456141   81659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:32:02.464835   81659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:32:02.478309   81659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:32:02.478368   81659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:32:02.487402   81659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:32:02.496227   81659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:32:02.496299   81659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:32:02.504851   81659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:32:02.513659   81659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:32:02.513727   81659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:32:02.522638   81659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:32:02.531504   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:02.646467   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:03.820869   81659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.174370721s)
	I1105 19:32:03.820923   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:04.010895   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:04.075901   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:04.190578   81659 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:32:04.190666   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:04.691404   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:05.190958   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:05.691675   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:06.191211   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:06.219642   81659 api_server.go:72] duration metric: took 2.029059403s to wait for apiserver process to appear ...
	I1105 19:32:06.219675   81659 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:32:06.219697   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:09.161077   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:32:09.161116   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:32:09.161133   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:09.239308   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:32:09.239364   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:32:09.239383   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:09.243976   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:32:09.244011   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:32:09.720150   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:09.726085   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:32:09.726121   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:32:10.220364   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:10.225165   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:32:10.225197   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:32:10.719905   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:10.724404   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 200:
	ok
	I1105 19:32:10.733201   81659 api_server.go:141] control plane version: v1.31.2
	I1105 19:32:10.733240   81659 api_server.go:131] duration metric: took 4.513557287s to wait for apiserver health ...
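
The healthz sequence above is the expected shape during a restart: the endpoint first answers 403 for the anonymous probe, then 500 while post-start hooks (bootstrap-controller, rbac/bootstrap-roles, and friends) finish, and finally 200. Below is a minimal poller with the same behaviour, using an insecure TLS client because the probe presents no client certificate; the address comes from the log, and the timeout and interval are illustrative rather than minikube's exact values.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline passes, printing each non-200 body much like the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe is unauthenticated, so skip verification of the apiserver cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.217:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
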
	I1105 19:32:10.733253   81659 cni.go:84] Creating CNI manager for ""
	I1105 19:32:10.733262   81659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:32:10.735931   81659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:32:10.737016   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:32:10.747735   81659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:32:10.765670   81659 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:32:10.786066   81659 system_pods.go:59] 8 kube-system pods found
	I1105 19:32:10.786123   81659 system_pods.go:61] "coredns-7c65d6cfc9-hccg9" [b5bbe8a2-e713-4521-9afb-59f262be9b77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:32:10.786136   81659 system_pods.go:61] "etcd-newest-cni-886087" [dba15c90-352f-4011-b36b-ebf22cf417c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:32:10.786149   81659 system_pods.go:61] "kube-apiserver-newest-cni-886087" [1a98004c-5d8a-4c9c-9dab-bd5f73ff1bb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:32:10.786158   81659 system_pods.go:61] "kube-controller-manager-newest-cni-886087" [792b0691-de62-4426-8b2f-af02dbbd5295] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:32:10.786172   81659 system_pods.go:61] "kube-proxy-pdcz9" [e61fb8e1-e5a0-4e43-a2e3-98ee59eea944] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 19:32:10.786182   81659 system_pods.go:61] "kube-scheduler-newest-cni-886087" [e4716c76-34c7-4e05-a818-5900f52d2143] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:32:10.786193   81659 system_pods.go:61] "metrics-server-6867b74b74-p7hsm" [abe65954-245a-488c-8392-0ae4c215110f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:32:10.786204   81659 system_pods.go:61] "storage-provisioner" [d738844c-6b07-46de-858e-a9c746ec926e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 19:32:10.786214   81659 system_pods.go:74] duration metric: took 20.515749ms to wait for pod list to return data ...
	I1105 19:32:10.786227   81659 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:32:10.798955   81659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:32:10.799009   81659 node_conditions.go:123] node cpu capacity is 2
	I1105 19:32:10.799027   81659 node_conditions.go:105] duration metric: took 12.791563ms to run NodePressure ...
	I1105 19:32:10.799052   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:11.070886   81659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:32:11.081971   81659 ops.go:34] apiserver oom_adj: -16
	I1105 19:32:11.081996   81659 kubeadm.go:597] duration metric: took 8.719543223s to restartPrimaryControlPlane
	I1105 19:32:11.082008   81659 kubeadm.go:394] duration metric: took 8.768988651s to StartCluster
	I1105 19:32:11.082029   81659 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:32:11.082116   81659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:32:11.084055   81659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:32:11.084351   81659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:32:11.084467   81659 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:32:11.084598   81659 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-886087"
	I1105 19:32:11.084600   81659 config.go:182] Loaded profile config "newest-cni-886087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:32:11.084616   81659 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-886087"
	I1105 19:32:11.084619   81659 addons.go:69] Setting default-storageclass=true in profile "newest-cni-886087"
	I1105 19:32:11.084637   81659 addons.go:69] Setting metrics-server=true in profile "newest-cni-886087"
	I1105 19:32:11.084651   81659 addons.go:234] Setting addon metrics-server=true in "newest-cni-886087"
	I1105 19:32:11.084652   81659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-886087"
	W1105 19:32:11.084658   81659 addons.go:243] addon metrics-server should already be in state true
	I1105 19:32:11.084658   81659 addons.go:69] Setting dashboard=true in profile "newest-cni-886087"
	I1105 19:32:11.084673   81659 addons.go:234] Setting addon dashboard=true in "newest-cni-886087"
	W1105 19:32:11.084687   81659 addons.go:243] addon dashboard should already be in state true
	I1105 19:32:11.084714   81659 host.go:66] Checking if "newest-cni-886087" exists ...
	I1105 19:32:11.084688   81659 host.go:66] Checking if "newest-cni-886087" exists ...
	W1105 19:32:11.084628   81659 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:32:11.084934   81659 host.go:66] Checking if "newest-cni-886087" exists ...
	I1105 19:32:11.085110   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.085139   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.085141   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.085153   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.085179   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.085303   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.085330   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.085366   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.086028   81659 out.go:177] * Verifying Kubernetes components...
	I1105 19:32:11.087435   81659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:32:11.102509   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I1105 19:32:11.103009   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.103610   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.103659   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.104310   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.104813   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I1105 19:32:11.104826   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I1105 19:32:11.104884   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.104959   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.105164   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.105269   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.105584   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.105607   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.105761   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.105790   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.106178   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.106260   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.106812   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.106861   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.106952   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.107023   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.108275   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
	I1105 19:32:11.108821   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.109349   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.109360   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.109644   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.109825   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.113368   81659 addons.go:234] Setting addon default-storageclass=true in "newest-cni-886087"
	W1105 19:32:11.113390   81659 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:32:11.113416   81659 host.go:66] Checking if "newest-cni-886087" exists ...
	I1105 19:32:11.113778   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.113847   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.123012   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33877
	I1105 19:32:11.123142   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
	I1105 19:32:11.143731   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.144508   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.144717   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.144732   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.145292   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.145312   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.146219   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.146396   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.146456   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.147267   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.148938   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:32:11.149134   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:32:11.151282   81659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:32:11.151429   81659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:32:11.153158   81659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:32:11.153184   81659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:32:11.153211   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:32:11.153250   81659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:32:11.153274   81659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:32:11.153292   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:32:11.157729   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.157971   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.158046   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:32:11.158074   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.158259   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:32:11.158466   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:32:11.158596   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:32:11.158842   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.158908   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:32:11.158937   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:32:11.159111   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:32:11.159129   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:32:11.159249   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:32:11.159333   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:32:11.164510   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46881
	I1105 19:32:11.164902   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.165495   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.165521   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.165860   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.166039   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.166201   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I1105 19:32:11.166663   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.167188   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.167205   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.167578   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.167731   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:32:11.168138   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.168171   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.170255   81659 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1105 19:32:11.175613   81659 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1105 19:32:11.176824   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1105 19:32:11.176845   81659 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1105 19:32:11.176867   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:32:11.180182   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.180619   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:32:11.180657   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.180896   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:32:11.181064   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:32:11.181228   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:32:11.181371   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:32:11.187103   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44895
	I1105 19:32:11.187577   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.188108   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.188132   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.188471   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.188665   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.190356   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:32:11.190850   81659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:32:11.190878   81659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:32:11.190898   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:32:11.194115   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.194707   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:32:11.194734   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.195357   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:32:11.195571   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:32:11.195988   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:32:11.196185   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:32:11.296824   81659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:32:11.317375   81659 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:32:11.317457   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:11.336254   81659 api_server.go:72] duration metric: took 251.868593ms to wait for apiserver process to appear ...
	I1105 19:32:11.336286   81659 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:32:11.336308   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:11.345722   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 200:
	ok
	I1105 19:32:11.347956   81659 api_server.go:141] control plane version: v1.31.2
	I1105 19:32:11.347980   81659 api_server.go:131] duration metric: took 11.686486ms to wait for apiserver health ...
	I1105 19:32:11.347990   81659 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:32:11.361348   81659 system_pods.go:59] 8 kube-system pods found
	I1105 19:32:11.361384   81659 system_pods.go:61] "coredns-7c65d6cfc9-hccg9" [b5bbe8a2-e713-4521-9afb-59f262be9b77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:32:11.361395   81659 system_pods.go:61] "etcd-newest-cni-886087" [dba15c90-352f-4011-b36b-ebf22cf417c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:32:11.361405   81659 system_pods.go:61] "kube-apiserver-newest-cni-886087" [1a98004c-5d8a-4c9c-9dab-bd5f73ff1bb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:32:11.361419   81659 system_pods.go:61] "kube-controller-manager-newest-cni-886087" [792b0691-de62-4426-8b2f-af02dbbd5295] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:32:11.361426   81659 system_pods.go:61] "kube-proxy-pdcz9" [e61fb8e1-e5a0-4e43-a2e3-98ee59eea944] Running
	I1105 19:32:11.361434   81659 system_pods.go:61] "kube-scheduler-newest-cni-886087" [e4716c76-34c7-4e05-a818-5900f52d2143] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:32:11.361443   81659 system_pods.go:61] "metrics-server-6867b74b74-p7hsm" [abe65954-245a-488c-8392-0ae4c215110f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:32:11.361449   81659 system_pods.go:61] "storage-provisioner" [d738844c-6b07-46de-858e-a9c746ec926e] Running
	I1105 19:32:11.361461   81659 system_pods.go:74] duration metric: took 13.464096ms to wait for pod list to return data ...
	I1105 19:32:11.361472   81659 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:32:11.371703   81659 default_sa.go:45] found service account: "default"
	I1105 19:32:11.371727   81659 default_sa.go:55] duration metric: took 10.239414ms for default service account to be created ...
	I1105 19:32:11.371739   81659 kubeadm.go:582] duration metric: took 287.36013ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1105 19:32:11.371751   81659 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:32:11.376624   81659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:32:11.376658   81659 node_conditions.go:123] node cpu capacity is 2
	I1105 19:32:11.376675   81659 node_conditions.go:105] duration metric: took 4.917172ms to run NodePressure ...
	I1105 19:32:11.376690   81659 start.go:241] waiting for startup goroutines ...
	I1105 19:32:11.380315   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1105 19:32:11.380346   81659 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1105 19:32:11.398793   81659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:32:11.423264   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1105 19:32:11.423298   81659 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1105 19:32:11.439099   81659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:32:11.439121   81659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:32:11.451431   81659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:32:11.509520   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1105 19:32:11.509551   81659 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1105 19:32:11.528198   81659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:32:11.528228   81659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:32:11.557529   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1105 19:32:11.557553   81659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1105 19:32:11.602080   81659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:32:11.602111   81659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:32:11.626047   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1105 19:32:11.626076   81659 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1105 19:32:11.660638   81659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:32:11.683111   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1105 19:32:11.683135   81659 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1105 19:32:11.767931   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1105 19:32:11.767957   81659 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1105 19:32:11.843625   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1105 19:32:11.843657   81659 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1105 19:32:11.867146   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:11.867172   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:11.867474   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:11.867493   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:11.867504   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:11.867512   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:11.868061   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Closing plugin on server side
	I1105 19:32:11.868123   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:11.868252   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:11.877237   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:11.877257   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:11.877508   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:11.877528   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:11.891673   81659 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1105 19:32:11.891700   81659 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1105 19:32:11.928661   81659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1105 19:32:13.335466   81659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.883995667s)
	I1105 19:32:13.335535   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:13.335550   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:13.335978   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Closing plugin on server side
	I1105 19:32:13.335995   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:13.336012   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:13.336026   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:13.336038   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:13.336263   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Closing plugin on server side
	I1105 19:32:13.336297   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:13.336308   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:13.442349   81659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.781654944s)
	I1105 19:32:13.442436   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:13.442453   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:13.442733   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:13.442752   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:13.442762   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:13.442770   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:13.443167   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:13.443185   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:13.443197   81659 addons.go:475] Verifying addon metrics-server=true in "newest-cni-886087"
	I1105 19:32:13.707348   81659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.778643318s)
	I1105 19:32:13.707387   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:13.707400   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:13.707712   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:13.707729   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:13.707738   81659 main.go:141] libmachine: Making call to close driver server
	I1105 19:32:13.707745   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Close
	I1105 19:32:13.708080   81659 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:32:13.708084   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Closing plugin on server side
	I1105 19:32:13.708093   81659 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:32:13.709822   81659 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-886087 addons enable metrics-server
	
	I1105 19:32:13.711338   81659 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1105 19:32:13.712791   81659 addons.go:510] duration metric: took 2.628333959s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1105 19:32:13.712832   81659 start.go:246] waiting for cluster config update ...
	I1105 19:32:13.712848   81659 start.go:255] writing updated cluster config ...
	I1105 19:32:13.713092   81659 ssh_runner.go:195] Run: rm -f paused
	I1105 19:32:13.765991   81659 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:32:13.767569   81659 out.go:177] * Done! kubectl is now configured to use "newest-cni-886087" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.431194365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce0ed588-cdf5-46fc-850f-1cae044a303b name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.432306721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e638b5f1-e291-42ed-ad1d-5c3ceb63020f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.432710506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835172432690219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e638b5f1-e291-42ed-ad1d-5c3ceb63020f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.433172145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3ab9aa0-2843-4fdd-abca-fab7cb97f177 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.433240307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3ab9aa0-2843-4fdd-abca-fab7cb97f177 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.433448815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730833907317693608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f787ef550160cd97dd2407c47c75addf578d4904b03bfd41c5f802269baf23ce,PodSandboxId:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730833887735884278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20,PodSandboxId:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833884147657996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730833876519682773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb,PodSandboxId:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833876509802109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0
-17c9225a3aa0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9,PodSandboxId:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833872477428273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2
df2f0aadf30,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2,PodSandboxId:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833872478857514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8f14005173a948ad352e15e16d6b07a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e,PodSandboxId:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833872465021873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4
a106,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a,PodSandboxId:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833872457852974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2
d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3ab9aa0-2843-4fdd-abca-fab7cb97f177 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.468315152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9052d3ec-a547-4838-9a65-340757772564 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.468397943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9052d3ec-a547-4838-9a65-340757772564 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.469512794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ac87e0b-7ec7-4483-a51c-f07401cd8e08 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.470362735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835172470331399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ac87e0b-7ec7-4483-a51c-f07401cd8e08 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.470894475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4403e1df-781a-4941-8188-fc0158af9199 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.470984755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4403e1df-781a-4941-8188-fc0158af9199 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.471173318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730833907317693608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f787ef550160cd97dd2407c47c75addf578d4904b03bfd41c5f802269baf23ce,PodSandboxId:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730833887735884278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20,PodSandboxId:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833884147657996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730833876519682773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb,PodSandboxId:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833876509802109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0
-17c9225a3aa0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9,PodSandboxId:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833872477428273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2
df2f0aadf30,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2,PodSandboxId:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833872478857514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8f14005173a948ad352e15e16d6b07a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e,PodSandboxId:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833872465021873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4
a106,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a,PodSandboxId:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833872457852974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2
d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4403e1df-781a-4941-8188-fc0158af9199 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.501383110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb6892d0-3999-4674-8a21-776d5c6c6bf5 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.501456578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb6892d0-3999-4674-8a21-776d5c6c6bf5 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.502583426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7b742ad-79ef-4bf5-87c7-e66b1eb7ec9a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.503059969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835172503035435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7b742ad-79ef-4bf5-87c7-e66b1eb7ec9a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.503454484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6e12d0f-9971-400a-9cf7-8b71aafb8b26 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.503515601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6e12d0f-9971-400a-9cf7-8b71aafb8b26 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.503755218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730833907317693608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f787ef550160cd97dd2407c47c75addf578d4904b03bfd41c5f802269baf23ce,PodSandboxId:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730833887735884278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20,PodSandboxId:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833884147657996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730833876519682773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb,PodSandboxId:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833876509802109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0
-17c9225a3aa0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9,PodSandboxId:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833872477428273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2
df2f0aadf30,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2,PodSandboxId:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833872478857514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8f14005173a948ad352e15e16d6b07a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e,PodSandboxId:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833872465021873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4
a106,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a,PodSandboxId:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833872457852974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2
d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6e12d0f-9971-400a-9cf7-8b71aafb8b26 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.533977451Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ee0ce354-db02-4a94-8b93-037407cbf46d name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.534350575Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&PodSandboxMetadata{Name:busybox,Uid:60cb45e2-148c-4641-8049-e602f75d631a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730833884223873623,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T19:11:16.095453503Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-cdvml,Uid:0b47fc10-0352-47df-aef2-46083091a840,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:173083
3883919659124,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T19:11:16.095456925Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cebb1c72ef7b00a4aac977637b17f7e3018f519ccaee27e48ecd2a21f7562148,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-44mcg,Uid:1af2bd4e-49d9-4126-9192-7d2697e2a601,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730833883127950447,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-44mcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af2bd4e-49d9-4126-9192-7d2697e2a601,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05
T19:11:16.095451025Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&PodSandboxMetadata{Name:kube-proxy-8v42c,Uid:007c81ba-8ec7-4cdf-87a0-17c9225a3aa0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730833876414030573,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0-17c9225a3aa0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T19:11:16.095455994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:df6efb9a-59ec-4296-baa4-91bbac895315,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730833876407292013,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-11-05T19:11:16.095452297Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-608095,Uid:9abf6fcb365f20056cd3e9b47141e2d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730833871562873395,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2d9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9abf6fcb365f20056cd3e9b47141e2d9,kubernetes.io/config.seen: 2024-11-05T19:11:11.079837766Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-60
8095,Uid:8f14005173a948ad352e15e16d6b07a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730833871551007726,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f14005173a948ad352e15e16d6b07a0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8f14005173a948ad352e15e16d6b07a0,kubernetes.io/config.seen: 2024-11-05T19:11:11.079836955Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-608095,Uid:3dd3f32dd8d97118149a2df2f0aadf30,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730833871548346620,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-def
ault-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2df2f0aadf30,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.10:8444,kubernetes.io/config.hash: 3dd3f32dd8d97118149a2df2f0aadf30,kubernetes.io/config.seen: 2024-11-05T19:11:11.079835715Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-608095,Uid:12ce6f2d55a174f91207a80726b4a106,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730833871547009578,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4a106,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clien
t-urls: https://192.168.50.10:2379,kubernetes.io/config.hash: 12ce6f2d55a174f91207a80726b4a106,kubernetes.io/config.seen: 2024-11-05T19:11:11.079832294Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ee0ce354-db02-4a94-8b93-037407cbf46d name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.535306151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7dcf816-c9dc-463f-a0bd-fdee2bc86c11 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.535387176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7dcf816-c9dc-463f-a0bd-fdee2bc86c11 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:52 default-k8s-diff-port-608095 crio[707]: time="2024-11-05 19:32:52.535769324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730833907317693608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f787ef550160cd97dd2407c47c75addf578d4904b03bfd41c5f802269baf23ce,PodSandboxId:da2dde857316d292ca9c103724dddd3d7db0d986385c2d838c513c127c5231e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730833887735884278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60cb45e2-148c-4641-8049-e602f75d631a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20,PodSandboxId:342009a1adef6608ff0764229692100d6b63bab6cdbf878d7e5960c66ce04890,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730833884147657996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cdvml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b47fc10-0352-47df-aef2-46083091a840,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976,PodSandboxId:3adea7b8362a4b52573dc96aeada27d53a27ec3313d649b311bff90c733700c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730833876519682773,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: df6efb9a-59ec-4296-baa4-91bbac895315,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb,PodSandboxId:73f7d5a507be5fd1340a643fcd8265c9cdc6f2f590cb0435aedc57b7d250d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730833876509802109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8v42c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 007c81ba-8ec7-4cdf-87a0
-17c9225a3aa0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9,PodSandboxId:36b1d0b1e93250b6af3e0e40f6f7b66f58428d6fe88d721e7e79ab57ce6eee94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730833872477428273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd3f32dd8d97118149a2
df2f0aadf30,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2,PodSandboxId:71131a920b634bab593fe6e55a037a4518cfe14cba8b14db50888b8f99b35cd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730833872478857514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8f14005173a948ad352e15e16d6b07a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e,PodSandboxId:a2394d68180b1447632bab3f0d18374c880a2cbe9705563d9f55c631a8c69ea6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730833872465021873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12ce6f2d55a174f91207a80726b4
a106,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a,PodSandboxId:d3bb63c9509f73c748efd96d6a565d7000921c4888f9a777c6208ba2301d42bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730833872457852974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-608095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9abf6fcb365f20056cd3e9b47141e2
d9,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7dcf816-c9dc-463f-a0bd-fdee2bc86c11 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	44080c0e289a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   3adea7b8362a4       storage-provisioner
	f787ef550160c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   da2dde857316d       busybox
	531bb8d98703d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   342009a1adef6       coredns-7c65d6cfc9-cdvml
	6039942d4d993       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   3adea7b8362a4       storage-provisioner
	e8180f551c559       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      21 minutes ago      Running             kube-proxy                1                   73f7d5a507be5       kube-proxy-8v42c
	4a77037302cd0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      21 minutes ago      Running             kube-controller-manager   1                   71131a920b634       kube-controller-manager-default-k8s-diff-port-608095
	a8de930573a64       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      21 minutes ago      Running             kube-apiserver            1                   36b1d0b1e9325       kube-apiserver-default-k8s-diff-port-608095
	e6393e5b4069d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   a2394d68180b1       etcd-default-k8s-diff-port-608095
	6bf66f706c934       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      21 minutes ago      Running             kube-scheduler            1                   d3bb63c9509f7       kube-scheduler-default-k8s-diff-port-608095
	
	
	==> coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38179 - 22427 "HINFO IN 2591781970772243088.4480814410341590386. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009728045s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-608095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-608095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=default-k8s-diff-port-608095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T19_03_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 19:03:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-608095
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 19:32:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 19:32:09 +0000   Tue, 05 Nov 2024 19:03:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 19:32:09 +0000   Tue, 05 Nov 2024 19:03:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 19:32:09 +0000   Tue, 05 Nov 2024 19:03:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 19:32:09 +0000   Tue, 05 Nov 2024 19:11:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.10
	  Hostname:    default-k8s-diff-port-608095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e79da7c0acd44febfe2af835f76cda4
	  System UUID:                1e79da7c-0acd-44fe-bfe2-af835f76cda4
	  Boot ID:                    b61422b5-93e7-47ec-a4bc-d57993931982
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-cdvml                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-608095                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-608095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-608095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-8v42c                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-608095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-44mcg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-608095 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-608095 event: Registered Node default-k8s-diff-port-608095 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-608095 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-608095 event: Registered Node default-k8s-diff-port-608095 in Controller
	
	
	==> dmesg <==
	[Nov 5 19:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057037] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046641] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920435] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.899616] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.351819] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov 5 19:11] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.056219] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066421] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.189159] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.134552] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.297918] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[  +4.042158] systemd-fstab-generator[789]: Ignoring "noauto" option for root device
	[  +1.983351] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +0.059896] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.589380] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.819767] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +3.870543] kauditd_printk_skb: 64 callbacks suppressed
	[ +24.202161] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] <==
	{"level":"info","ts":"2024-11-05T19:11:14.818164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T19:11:14.818202Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-11-05T19:11:31.446327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.956271ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5140739118161463722 > lease_revoke:<id:475792fdb5de1a93>","response":"size:28"}
	{"level":"warn","ts":"2024-11-05T19:11:31.598385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.941287ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5140739118161463723 > lease_revoke:<id:475792fdb5de1a57>","response":"size:28"}
	{"level":"info","ts":"2024-11-05T19:11:31.598519Z","caller":"traceutil/trace.go:171","msg":"trace[682513589] linearizableReadLoop","detail":"{readStateIndex:692; appliedIndex:690; }","duration":"539.487941ms","start":"2024-11-05T19:11:31.059016Z","end":"2024-11-05T19:11:31.598504Z","steps":["trace[682513589] 'read index received'  (duration: 153.756839ms)","trace[682513589] 'applied index is now lower than readState.Index'  (duration: 385.730299ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T19:11:31.598850Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"539.819562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-11-05T19:11:31.599240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.420273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-44mcg\" ","response":"range_response_count:1 size:4394"}
	{"level":"info","ts":"2024-11-05T19:11:31.599296Z","caller":"traceutil/trace.go:171","msg":"trace[1330847336] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-44mcg; range_end:; response_count:1; response_revision:650; }","duration":"387.475771ms","start":"2024-11-05T19:11:31.211805Z","end":"2024-11-05T19:11:31.599280Z","steps":["trace[1330847336] 'agreement among raft nodes before linearized reading'  (duration: 387.338302ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T19:11:31.599328Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T19:11:31.211749Z","time spent":"387.571043ms","remote":"127.0.0.1:38242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4417,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-44mcg\" "}
	{"level":"info","ts":"2024-11-05T19:11:31.599253Z","caller":"traceutil/trace.go:171","msg":"trace[317850113] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:650; }","duration":"540.243525ms","start":"2024-11-05T19:11:31.058997Z","end":"2024-11-05T19:11:31.599241Z","steps":["trace[317850113] 'agreement among raft nodes before linearized reading'  (duration: 539.767144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T19:11:31.599499Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T19:11:31.058955Z","time spent":"540.517834ms","remote":"127.0.0.1:38024","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-11-05T19:11:52.623219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.131604ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5140739118161463910 > lease_revoke:<id:475792fdbcf809db>","response":"size:28"}
	{"level":"info","ts":"2024-11-05T19:21:14.846527Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":893}
	{"level":"info","ts":"2024-11-05T19:21:14.862565Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":893,"took":"15.4884ms","hash":2176153668,"current-db-size-bytes":2727936,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2727936,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-11-05T19:21:14.862677Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2176153668,"revision":893,"compact-revision":-1}
	{"level":"info","ts":"2024-11-05T19:26:14.858952Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1135}
	{"level":"info","ts":"2024-11-05T19:26:14.862794Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1135,"took":"3.54624ms","hash":2713344028,"current-db-size-bytes":2727936,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-11-05T19:26:14.862834Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2713344028,"revision":1135,"compact-revision":893}
	{"level":"info","ts":"2024-11-05T19:31:11.619036Z","caller":"traceutil/trace.go:171","msg":"trace[1694948175] transaction","detail":"{read_only:false; response_revision:1619; number_of_response:1; }","duration":"115.533913ms","start":"2024-11-05T19:31:11.503162Z","end":"2024-11-05T19:31:11.618696Z","steps":["trace[1694948175] 'process raft request'  (duration: 115.381656ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T19:31:11.881434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.038951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T19:31:11.881521Z","caller":"traceutil/trace.go:171","msg":"trace[1378242854] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1619; }","duration":"169.211446ms","start":"2024-11-05T19:31:11.712294Z","end":"2024-11-05T19:31:11.881505Z","steps":["trace[1378242854] 'range keys from in-memory index tree'  (duration: 168.928791ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T19:31:12.724301Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.041874ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5140739118161471554 > lease_revoke:<id:475792fdbcf827e5>","response":"size:28"}
	{"level":"info","ts":"2024-11-05T19:31:14.866513Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1379}
	{"level":"info","ts":"2024-11-05T19:31:14.870372Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1379,"took":"3.464651ms","hash":3968358663,"current-db-size-bytes":2727936,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-11-05T19:31:14.870552Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3968358663,"revision":1379,"compact-revision":1135}
	
	
	==> kernel <==
	 19:32:52 up 22 min,  0 users,  load average: 0.30, 0.20, 0.13
	Linux default-k8s-diff-port-608095 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] <==
	I1105 19:29:17.120770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:29:17.120829       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:31:16.118073       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:31:16.118550       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1105 19:31:17.121040       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:31:17.121112       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1105 19:31:17.121154       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:31:17.121238       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:31:17.122394       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:31:17.122504       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:32:17.123700       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:32:17.123804       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1105 19:32:17.123871       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:32:17.123993       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:32:17.125118       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:32:17.125189       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] <==
	E1105 19:27:49.836148       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:27:50.320401       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:28:19.842697       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:28:20.327196       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:28:49.849043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:28:50.334470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:29:19.856376       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:29:20.341511       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:29:49.862000       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:29:50.349607       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:30:19.867775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:30:20.358090       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:30:49.874500       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:30:50.367609       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:31:19.882325       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:31:20.377403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:31:49.888992       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:31:50.384039       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:32:09.319489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-608095"
	E1105 19:32:19.895417       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:32:20.390985       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:32:32.131375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="465.235µs"
	I1105 19:32:47.128303       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="101.563µs"
	E1105 19:32:49.901673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:32:50.397580       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 19:11:16.695877       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 19:11:16.705281       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.10"]
	E1105 19:11:16.705458       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 19:11:16.732445       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 19:11:16.732504       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 19:11:16.732540       1 server_linux.go:169] "Using iptables Proxier"
	I1105 19:11:16.734792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 19:11:16.735713       1 server.go:483] "Version info" version="v1.31.2"
	I1105 19:11:16.735745       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:11:16.740088       1 config.go:199] "Starting service config controller"
	I1105 19:11:16.740163       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 19:11:16.740263       1 config.go:105] "Starting endpoint slice config controller"
	I1105 19:11:16.740310       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 19:11:16.740897       1 config.go:328] "Starting node config controller"
	I1105 19:11:16.742791       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 19:11:16.746026       1 shared_informer.go:320] Caches are synced for node config
	I1105 19:11:16.840813       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 19:11:16.840856       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] <==
	I1105 19:11:13.121832       1 serving.go:386] Generated self-signed cert in-memory
	W1105 19:11:16.054474       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1105 19:11:16.054685       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1105 19:11:16.054768       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1105 19:11:16.054797       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1105 19:11:16.115642       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1105 19:11:16.122998       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:11:16.125232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1105 19:11:16.125306       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 19:11:16.125382       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1105 19:11:16.125475       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1105 19:11:16.227225       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 19:31:51 default-k8s-diff-port-608095 kubelet[918]: E1105 19:31:51.420089     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835111419654195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:01 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:01.424450     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835121422534069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:01 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:01.424880     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835121422534069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:04 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:04.112852     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:32:11 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:11.142849     918 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 19:32:11 default-k8s-diff-port-608095 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 19:32:11 default-k8s-diff-port-608095 kubelet[918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 19:32:11 default-k8s-diff-port-608095 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 19:32:11 default-k8s-diff-port-608095 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 19:32:11 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:11.427800     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835131427083593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:11 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:11.427967     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835131427083593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:18 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:18.125203     918 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 05 19:32:18 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:18.125515     918 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 05 19:32:18 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:18.126284     918 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrrsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-44mcg_kube-system(1af2bd4e-49d9-4126-9192-7d2697e2a601): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Nov 05 19:32:18 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:18.127684     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:32:21 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:21.429782     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835141429143640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:21 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:21.430218     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835141429143640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:31 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:31.431975     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835151431338770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:31 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:31.432315     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835151431338770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:32 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:32.113423     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:32:41 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:41.434186     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835161433628351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:41 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:41.434227     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835161433628351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:47 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:47.115033     918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-44mcg" podUID="1af2bd4e-49d9-4126-9192-7d2697e2a601"
	Nov 05 19:32:51 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:51.435966     918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835171435535038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:51 default-k8s-diff-port-608095 kubelet[918]: E1105 19:32:51.435997     918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835171435535038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] <==
	I1105 19:11:47.408115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 19:11:47.418124       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 19:11:47.418210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 19:12:04.817333       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 19:12:04.818325       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-608095_752a1b62-485c-40c7-9644-380ce41ccb9d!
	I1105 19:12:04.818739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87ec9435-bf7d-4318-aa0b-da7b3dfced1b", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-608095_752a1b62-485c-40c7-9644-380ce41ccb9d became leader
	I1105 19:12:04.919567       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-608095_752a1b62-485c-40c7-9644-380ce41ccb9d!
	
	
	==> storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] <==
	I1105 19:11:16.621589       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1105 19:11:46.625190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-608095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-44mcg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-608095 describe pod metrics-server-6867b74b74-44mcg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-608095 describe pod metrics-server-6867b74b74-44mcg: exit status 1 (59.173048ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-44mcg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-608095 describe pod metrics-server-6867b74b74-44mcg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (489.83s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (425.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271881 -n embed-certs-271881
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-11-05 19:32:12.04097495 +0000 UTC m=+6665.713699644
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-271881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-271881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.332µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-271881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-271881 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-271881 logs -n 25: (1.283457241s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-537175 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | disable-driver-mounts-537175                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:04 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-459223             | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-271881            | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:30 UTC | 05 Nov 24 19:30 UTC |
	| start   | -p newest-cni-886087 --memory=2200 --alsologtostderr   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:30 UTC | 05 Nov 24 19:31 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:31 UTC |
	| addons  | enable metrics-server -p newest-cni-886087             | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-886087                                   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-886087                  | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC | 05 Nov 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-886087 --memory=2200 --alsologtostderr   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:31 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:31:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:31:38.149094   81659 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:31:38.149230   81659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:31:38.149242   81659 out.go:358] Setting ErrFile to fd 2...
	I1105 19:31:38.149249   81659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:31:38.149499   81659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:31:38.150088   81659 out.go:352] Setting JSON to false
	I1105 19:31:38.151075   81659 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8040,"bootTime":1730827058,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:31:38.151138   81659 start.go:139] virtualization: kvm guest
	I1105 19:31:38.153466   81659 out.go:177] * [newest-cni-886087] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:31:38.154910   81659 notify.go:220] Checking for updates...
	I1105 19:31:38.154982   81659 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:31:38.156395   81659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:31:38.157631   81659 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:31:38.158830   81659 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:31:38.160143   81659 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:31:38.161615   81659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:31:38.163401   81659 config.go:182] Loaded profile config "newest-cni-886087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:31:38.163777   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:31:38.163814   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:31:38.178806   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I1105 19:31:38.179249   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:31:38.179856   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:31:38.179894   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:31:38.180202   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:31:38.180371   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:38.180613   81659 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:31:38.180883   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:31:38.180920   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:31:38.195462   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1105 19:31:38.195897   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:31:38.196442   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:31:38.196467   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:31:38.196759   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:31:38.196935   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:38.233122   81659 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:31:38.234530   81659 start.go:297] selected driver: kvm2
	I1105 19:31:38.234548   81659 start.go:901] validating driver "kvm2" against &{Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:31:38.234645   81659 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:31:38.235342   81659 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:31:38.235425   81659 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:31:38.250731   81659 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:31:38.251188   81659 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1105 19:31:38.251220   81659 cni.go:84] Creating CNI manager for ""
	I1105 19:31:38.251273   81659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:31:38.251320   81659 start.go:340] cluster config:
	{Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:31:38.251454   81659 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:31:38.253274   81659 out.go:177] * Starting "newest-cni-886087" primary control-plane node in "newest-cni-886087" cluster
	I1105 19:31:38.254589   81659 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:31:38.254623   81659 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 19:31:38.254636   81659 cache.go:56] Caching tarball of preloaded images
	I1105 19:31:38.254743   81659 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:31:38.254758   81659 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 19:31:38.254870   81659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/config.json ...
	I1105 19:31:38.255125   81659 start.go:360] acquireMachinesLock for newest-cni-886087: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:31:38.255175   81659 start.go:364] duration metric: took 28.402µs to acquireMachinesLock for "newest-cni-886087"
	I1105 19:31:38.255193   81659 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:31:38.255202   81659 fix.go:54] fixHost starting: 
	I1105 19:31:38.255507   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:31:38.255544   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:31:38.270186   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I1105 19:31:38.270563   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:31:38.271031   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:31:38.271051   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:31:38.271423   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:31:38.271623   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:38.271764   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:31:38.273175   81659 fix.go:112] recreateIfNeeded on newest-cni-886087: state=Stopped err=<nil>
	I1105 19:31:38.273217   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	W1105 19:31:38.273373   81659 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:31:38.275760   81659 out.go:177] * Restarting existing kvm2 VM for "newest-cni-886087" ...
	I1105 19:31:38.276821   81659 main.go:141] libmachine: (newest-cni-886087) Calling .Start
	I1105 19:31:38.276982   81659 main.go:141] libmachine: (newest-cni-886087) Ensuring networks are active...
	I1105 19:31:38.277848   81659 main.go:141] libmachine: (newest-cni-886087) Ensuring network default is active
	I1105 19:31:38.278134   81659 main.go:141] libmachine: (newest-cni-886087) Ensuring network mk-newest-cni-886087 is active
	I1105 19:31:38.278429   81659 main.go:141] libmachine: (newest-cni-886087) Getting domain xml...
	I1105 19:31:38.279078   81659 main.go:141] libmachine: (newest-cni-886087) Creating domain...
	I1105 19:31:39.502276   81659 main.go:141] libmachine: (newest-cni-886087) Waiting to get IP...
	I1105 19:31:39.503090   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:39.503443   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:39.503547   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:39.503428   81709 retry.go:31] will retry after 250.164469ms: waiting for machine to come up
	I1105 19:31:39.754791   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:39.755478   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:39.755509   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:39.755442   81709 retry.go:31] will retry after 375.555481ms: waiting for machine to come up
	I1105 19:31:40.132932   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:40.133416   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:40.133450   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:40.133341   81709 retry.go:31] will retry after 400.386653ms: waiting for machine to come up
	I1105 19:31:40.535017   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:40.535517   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:40.535544   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:40.535458   81709 retry.go:31] will retry after 390.748801ms: waiting for machine to come up
	I1105 19:31:40.928002   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:40.928522   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:40.928553   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:40.928472   81709 retry.go:31] will retry after 587.673187ms: waiting for machine to come up
	I1105 19:31:41.518371   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:41.519006   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:41.519038   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:41.518925   81709 retry.go:31] will retry after 675.665704ms: waiting for machine to come up
	I1105 19:31:42.195867   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:42.196379   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:42.196403   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:42.196340   81709 retry.go:31] will retry after 1.084942101s: waiting for machine to come up
	I1105 19:31:43.283142   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:43.283596   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:43.283627   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:43.283550   81709 retry.go:31] will retry after 1.257040395s: waiting for machine to come up
	I1105 19:31:44.541752   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:44.542140   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:44.542164   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:44.542114   81709 retry.go:31] will retry after 1.313530392s: waiting for machine to come up
	I1105 19:31:45.857551   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:45.857975   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:45.857996   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:45.857938   81709 retry.go:31] will retry after 1.973444875s: waiting for machine to come up
	I1105 19:31:47.833857   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:47.834322   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:47.834352   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:47.834258   81709 retry.go:31] will retry after 2.471561461s: waiting for machine to come up
	I1105 19:31:50.308495   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:50.308947   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:50.308965   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:50.308904   81709 retry.go:31] will retry after 2.274664056s: waiting for machine to come up
	I1105 19:31:52.585705   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:52.586075   81659 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:31:52.586103   81659 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:31:52.586020   81709 retry.go:31] will retry after 2.999577394s: waiting for machine to come up
	I1105 19:31:55.588143   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.588593   81659 main.go:141] libmachine: (newest-cni-886087) Found IP for machine: 192.168.61.217
	I1105 19:31:55.588617   81659 main.go:141] libmachine: (newest-cni-886087) Reserving static IP address...
	I1105 19:31:55.588631   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has current primary IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.588972   81659 main.go:141] libmachine: (newest-cni-886087) Reserved static IP address: 192.168.61.217
	I1105 19:31:55.588997   81659 main.go:141] libmachine: (newest-cni-886087) Waiting for SSH to be available...
	I1105 19:31:55.589015   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "newest-cni-886087", mac: "52:54:00:c0:46:5f", ip: "192.168.61.217"} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.589052   81659 main.go:141] libmachine: (newest-cni-886087) DBG | skip adding static IP to network mk-newest-cni-886087 - found existing host DHCP lease matching {name: "newest-cni-886087", mac: "52:54:00:c0:46:5f", ip: "192.168.61.217"}
	I1105 19:31:55.589068   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Getting to WaitForSSH function...
	I1105 19:31:55.590945   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.591268   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.591293   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.591469   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Using SSH client type: external
	I1105 19:31:55.591498   81659 main.go:141] libmachine: (newest-cni-886087) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa (-rw-------)
	I1105 19:31:55.591530   81659 main.go:141] libmachine: (newest-cni-886087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:31:55.591547   81659 main.go:141] libmachine: (newest-cni-886087) DBG | About to run SSH command:
	I1105 19:31:55.591558   81659 main.go:141] libmachine: (newest-cni-886087) DBG | exit 0
	I1105 19:31:55.714962   81659 main.go:141] libmachine: (newest-cni-886087) DBG | SSH cmd err, output: <nil>: 
	I1105 19:31:55.715453   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetConfigRaw
	I1105 19:31:55.716059   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:55.718740   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.719123   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.719162   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.719398   81659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/config.json ...
	I1105 19:31:55.719615   81659 machine.go:93] provisionDockerMachine start ...
	I1105 19:31:55.719634   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:55.719845   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:55.722185   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.722574   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.722603   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.722789   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:55.722928   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.723121   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.723266   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:55.723443   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:55.723629   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:55.723643   81659 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:31:55.831124   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:31:55.831161   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:55.831415   81659 buildroot.go:166] provisioning hostname "newest-cni-886087"
	I1105 19:31:55.831447   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:55.831613   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:55.834426   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.834811   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.834840   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.835048   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:55.835206   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.835334   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.835443   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:55.835568   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:55.835761   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:55.835776   81659 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-886087 && echo "newest-cni-886087" | sudo tee /etc/hostname
	I1105 19:31:55.957442   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-886087
	
	I1105 19:31:55.957473   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:55.960162   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.960489   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:55.960522   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:55.960703   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:55.960897   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.961071   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:55.961214   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:55.961354   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:55.961558   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:55.961574   81659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-886087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-886087/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-886087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:31:56.083790   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:31:56.083821   81659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:31:56.083878   81659 buildroot.go:174] setting up certificates
	I1105 19:31:56.083893   81659 provision.go:84] configureAuth start
	I1105 19:31:56.083910   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:56.084187   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:56.087133   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.087571   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.087600   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.087781   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.090575   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.090997   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.091029   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.091307   81659 provision.go:143] copyHostCerts
	I1105 19:31:56.091353   81659 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:31:56.091369   81659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:31:56.091450   81659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:31:56.091622   81659 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:31:56.091631   81659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:31:56.091670   81659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:31:56.091775   81659 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:31:56.091787   81659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:31:56.091823   81659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:31:56.091920   81659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.newest-cni-886087 san=[127.0.0.1 192.168.61.217 localhost minikube newest-cni-886087]
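	(The server certificate above is generated in-process by minikube's Go code, not via openssl. Purely as an illustrative sketch, a CA-signed server cert with the same SANs could be produced by hand roughly like this, assuming bash and hypothetical file names ca.pem / ca-key.pem / server.pem in the working directory:)
	  # illustrative only; minikube does this with Go's crypto/x509
	  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	    -subj "/O=jenkins.newest-cni-886087" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.217,DNS:localhost,DNS:minikube,DNS:newest-cni-886087") \
	    -out server.pem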
	I1105 19:31:56.189913   81659 provision.go:177] copyRemoteCerts
	I1105 19:31:56.189971   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:31:56.189997   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.192299   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.192633   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.192661   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.192808   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.192972   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.193099   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.193249   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:56.276612   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:31:56.303079   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:31:56.328222   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:31:56.351014   81659 provision.go:87] duration metric: took 267.105822ms to configureAuth
	I1105 19:31:56.351043   81659 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:31:56.351254   81659 config.go:182] Loaded profile config "newest-cni-886087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:31:56.351332   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.353926   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.354250   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.354302   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.354487   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.354681   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.354833   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.355017   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.355203   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:56.355384   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:56.355404   81659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:31:56.595986   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:31:56.596012   81659 machine.go:96] duration metric: took 876.384014ms to provisionDockerMachine
	I1105 19:31:56.596026   81659 start.go:293] postStartSetup for "newest-cni-886087" (driver="kvm2")
	I1105 19:31:56.596039   81659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:31:56.596076   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.596362   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:31:56.596390   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.599199   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.599611   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.599652   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.599760   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.599976   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.600145   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.600318   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:56.681700   81659 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:31:56.685772   81659 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:31:56.685796   81659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:31:56.685868   81659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:31:56.685963   81659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:31:56.686093   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:31:56.695473   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:31:56.718655   81659 start.go:296] duration metric: took 122.585679ms for postStartSetup
	I1105 19:31:56.718703   81659 fix.go:56] duration metric: took 18.463500183s for fixHost
	I1105 19:31:56.718728   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.721350   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.721682   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.721712   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.721837   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.722043   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.722236   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.722381   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.722539   81659 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:56.722733   81659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:56.722745   81659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:31:56.831466   81659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730835116.808398865
	
	I1105 19:31:56.831490   81659 fix.go:216] guest clock: 1730835116.808398865
	I1105 19:31:56.831500   81659 fix.go:229] Guest: 2024-11-05 19:31:56.808398865 +0000 UTC Remote: 2024-11-05 19:31:56.718708499 +0000 UTC m=+18.609118399 (delta=89.690366ms)
	I1105 19:31:56.831525   81659 fix.go:200] guest clock delta is within tolerance: 89.690366ms
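	(The delta above is just the guest clock read over SSH via date +%s.%N, 1730835116.808398865, minus the host-side timestamp taken at the end of fixHost, 1730835116.718708499: 0.808398865 - 0.718708499 = 0.089690366 s, i.e. the reported 89.690366ms, which is inside the tolerance the log notes, so no clock adjustment is logged.)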
	I1105 19:31:56.831540   81659 start.go:83] releasing machines lock for "newest-cni-886087", held for 18.576353925s
	I1105 19:31:56.831566   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.831859   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:56.834811   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.835197   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.835224   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.835392   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.835835   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.836031   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:56.836148   81659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:31:56.836194   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.836252   81659 ssh_runner.go:195] Run: cat /version.json
	I1105 19:31:56.836276   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:56.839031   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.839059   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.839413   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.839443   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:56.839464   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.839479   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:56.839594   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.839750   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:56.839752   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.839892   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:56.839929   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.840037   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:56.840033   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:56.840183   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:56.947428   81659 ssh_runner.go:195] Run: systemctl --version
	I1105 19:31:56.953300   81659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:31:57.092022   81659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:31:57.098268   81659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:31:57.098354   81659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:31:57.113198   81659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:31:57.113230   81659 start.go:495] detecting cgroup driver to use...
	I1105 19:31:57.113289   81659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:31:57.129183   81659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:31:57.143140   81659 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:31:57.143222   81659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:31:57.156946   81659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:31:57.170736   81659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:31:57.284528   81659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:31:57.405685   81659 docker.go:233] disabling docker service ...
	I1105 19:31:57.405763   81659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:31:57.419599   81659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:31:57.431787   81659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:31:57.555457   81659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:31:57.665868   81659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:31:57.679880   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:31:57.697840   81659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:31:57.697921   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.708065   81659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:31:57.708125   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.717851   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.728177   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.737821   81659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:31:57.748692   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.758609   81659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.774502   81659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:57.784623   81659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:31:57.793246   81659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:31:57.793310   81659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:31:57.806454   81659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
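	(The earlier sysctl failure is expected: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is exactly what the modprobe above does. A manual spot-check on the guest would look like:)
	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables   # resolves now instead of "No such file or directory"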
	I1105 19:31:57.815058   81659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:31:57.934064   81659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:31:58.022623   81659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:31:58.022717   81659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:31:58.027195   81659 start.go:563] Will wait 60s for crictl version
	I1105 19:31:58.027257   81659 ssh_runner.go:195] Run: which crictl
	I1105 19:31:58.030808   81659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:31:58.069408   81659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
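	(For reference, the same runtime information can be queried by hand against the endpoint written to /etc/crictl.yaml above, e.g.:)
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info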
	I1105 19:31:58.069499   81659 ssh_runner.go:195] Run: crio --version
	I1105 19:31:58.096795   81659 ssh_runner.go:195] Run: crio --version
	I1105 19:31:58.126804   81659 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:31:58.127932   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:58.130556   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:58.130925   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:58.130948   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:58.131156   81659 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:31:58.134860   81659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:31:58.148115   81659 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1105 19:31:58.149491   81659 kubeadm.go:883] updating cluster {Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:31:58.149616   81659 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:31:58.149693   81659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:31:58.185973   81659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:31:58.186068   81659 ssh_runner.go:195] Run: which lz4
	I1105 19:31:58.190114   81659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:31:58.194132   81659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:31:58.194167   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:31:59.422646   81659 crio.go:462] duration metric: took 1.232559867s to copy over tarball
	I1105 19:31:59.422741   81659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:32:01.473465   81659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.050691247s)
	I1105 19:32:01.473515   81659 crio.go:469] duration metric: took 2.050829011s to extract the tarball
	I1105 19:32:01.473526   81659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:32:01.510917   81659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:32:01.562006   81659 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:32:01.562030   81659 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:32:01.562037   81659 kubeadm.go:934] updating node { 192.168.61.217 8443 v1.31.2 crio true true} ...
	I1105 19:32:01.562131   81659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-886087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
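	(The empty ExecStart= line in the unit text above is the usual systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before redefining it. The drop-in itself is written below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the merged unit can be inspected on the guest with:)
	  systemctl cat kubelet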
	I1105 19:32:01.562200   81659 ssh_runner.go:195] Run: crio config
	I1105 19:32:01.609362   81659 cni.go:84] Creating CNI manager for ""
	I1105 19:32:01.609389   81659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:32:01.609403   81659 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1105 19:32:01.609433   81659 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.217 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-886087 NodeName:newest-cni-886087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.61.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:32:01.609614   81659 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-886087"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
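	(The generated config above, which is written below to /var/tmp/minikube/kubeadm.yaml.new, uses the kubeadm.k8s.io/v1beta4 API introduced with Kubernetes v1.31. Assuming kubeadm v1.31 is already on the node, it could also be sanity-checked offline with something like:)
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new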
	
	I1105 19:32:01.609688   81659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:32:01.620778   81659 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:32:01.620844   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:32:01.631260   81659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1105 19:32:01.648389   81659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:32:01.665644   81659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1105 19:32:01.683737   81659 ssh_runner.go:195] Run: grep 192.168.61.217	control-plane.minikube.internal$ /etc/hosts
	I1105 19:32:01.687493   81659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:32:01.699954   81659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:32:01.821990   81659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:32:01.838401   81659 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087 for IP: 192.168.61.217
	I1105 19:32:01.838423   81659 certs.go:194] generating shared ca certs ...
	I1105 19:32:01.838438   81659 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:32:01.838590   81659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:32:01.838636   81659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:32:01.838646   81659 certs.go:256] generating profile certs ...
	I1105 19:32:01.838748   81659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/client.key
	I1105 19:32:01.838824   81659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key.141acc84
	I1105 19:32:01.838884   81659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.key
	I1105 19:32:01.839118   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:32:01.839201   81659 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:32:01.839215   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:32:01.839265   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:32:01.839305   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:32:01.839345   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:32:01.839407   81659 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:32:01.840276   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:32:01.875771   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:32:01.918265   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:32:01.953634   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:32:01.983013   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:32:02.017669   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:32:02.040380   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:32:02.064023   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:32:02.088155   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:32:02.111233   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:32:02.133841   81659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:32:02.156611   81659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:32:02.174743   81659 ssh_runner.go:195] Run: openssl version
	I1105 19:32:02.180453   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:32:02.190928   81659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:32:02.195412   81659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:32:02.195464   81659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:32:02.201478   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:32:02.212090   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:32:02.222067   81659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:32:02.226513   81659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:32:02.226562   81659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:32:02.232169   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:32:02.244620   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:32:02.254640   81659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:32:02.259104   81659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:32:02.259161   81659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:32:02.264644   81659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:32:02.274489   81659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:32:02.278709   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:32:02.284325   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:32:02.289774   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:32:02.295667   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:32:02.301401   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:32:02.307157   81659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
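	The openssl x509 ... -checkend 86400 runs above confirm that each control-plane certificate stays valid for at least the next 24 hours (86400 seconds) before the cluster restart proceeds. A minimal Go sketch of an equivalent check, assuming an illustrative helper name rather than minikube's actual implementation (the file path is taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// matching the semantics of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}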
	I1105 19:32:02.313025   81659 kubeadm.go:392] StartCluster: {Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0
s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:32:02.313140   81659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:32:02.313188   81659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:32:02.351517   81659 cri.go:89] found id: ""
	I1105 19:32:02.351604   81659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:32:02.362427   81659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:32:02.362446   81659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:32:02.362484   81659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:32:02.371723   81659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:32:02.372612   81659 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-886087" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:32:02.373245   81659 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-886087" cluster setting kubeconfig missing "newest-cni-886087" context setting]
	I1105 19:32:02.374107   81659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:32:02.375681   81659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:32:02.384794   81659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.217
	I1105 19:32:02.384825   81659 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:32:02.384837   81659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:32:02.384891   81659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:32:02.422959   81659 cri.go:89] found id: ""
	I1105 19:32:02.423041   81659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:32:02.438194   81659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:32:02.447370   81659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:32:02.447389   81659 kubeadm.go:157] found existing configuration files:
	
	I1105 19:32:02.447429   81659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:32:02.456079   81659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:32:02.456141   81659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:32:02.464835   81659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:32:02.478309   81659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:32:02.478368   81659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:32:02.487402   81659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:32:02.496227   81659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:32:02.496299   81659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:32:02.504851   81659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:32:02.513659   81659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:32:02.513727   81659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:32:02.522638   81659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:32:02.531504   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:02.646467   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:03.820869   81659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.174370721s)
	I1105 19:32:03.820923   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:04.010895   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:04.075901   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
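	Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A sketch of driving the same phase sequence from Go; the wrapper function and error handling are illustrative, while the shell commands themselves are copied verbatim from the log lines above:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// runPhase invokes one `kubeadm init phase ...` exactly as the log does.
	func runPhase(phase string) error {
		cmd := exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase `+
				phase+` --config /var/tmp/minikube/kubeadm.yaml`)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Same phase order as the log; the addons phase follows later,
		// once the apiserver reports healthy.
		for _, phase := range []string{
			"certs all", "kubeconfig all", "kubelet-start",
			"control-plane all", "etcd local",
		} {
			if err := runPhase(phase); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}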
	I1105 19:32:04.190578   81659 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:32:04.190666   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:04.691404   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:05.190958   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:05.691675   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:06.191211   81659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:32:06.219642   81659 api_server.go:72] duration metric: took 2.029059403s to wait for apiserver process to appear ...
	I1105 19:32:06.219675   81659 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:32:06.219697   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:09.161077   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:32:09.161116   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:32:09.161133   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:09.239308   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:32:09.239364   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:32:09.239383   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:09.243976   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:32:09.244011   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:32:09.720150   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:09.726085   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:32:09.726121   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:32:10.220364   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:10.225165   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:32:10.225197   81659 api_server.go:103] status: https://192.168.61.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:32:10.719905   81659 api_server.go:253] Checking apiserver healthz at https://192.168.61.217:8443/healthz ...
	I1105 19:32:10.724404   81659 api_server.go:279] https://192.168.61.217:8443/healthz returned 200:
	ok
	I1105 19:32:10.733201   81659 api_server.go:141] control plane version: v1.31.2
	I1105 19:32:10.733240   81659 api_server.go:131] duration metric: took 4.513557287s to wait for apiserver health ...
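	The health wait above polls https://192.168.61.217:8443/healthz roughly every half second, tolerating the early 403 (anonymous user) and 500 (post-start hooks still failing) responses until the endpoint finally returns 200. A minimal Go sketch of such a polling loop, with the URL, interval, and timeout chosen as assumptions:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz retries GET url until it returns 200 OK or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The test harness trusts the cluster CA out of band; skipping
			// verification here only keeps the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz not ready within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.217:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}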
	I1105 19:32:10.733253   81659 cni.go:84] Creating CNI manager for ""
	I1105 19:32:10.733262   81659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:32:10.735931   81659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:32:10.737016   81659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:32:10.747735   81659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
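	The bridge CNI step writes a small conflist to /etc/cni/net.d/1-k8s.conflist on the node. The sketch below builds a representative bridge + host-local configuration in Go; it is not the exact 496-byte file minikube generates, and the network name and subnet value are assumptions:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// A generic bridge CNI conflist: a bridge plugin with host-local IPAM,
		// plus portmap for hostPort support. Values are illustrative.
		conflist := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out)) // minikube copies content like this to the node over SSH
	}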
	I1105 19:32:10.765670   81659 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:32:10.786066   81659 system_pods.go:59] 8 kube-system pods found
	I1105 19:32:10.786123   81659 system_pods.go:61] "coredns-7c65d6cfc9-hccg9" [b5bbe8a2-e713-4521-9afb-59f262be9b77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:32:10.786136   81659 system_pods.go:61] "etcd-newest-cni-886087" [dba15c90-352f-4011-b36b-ebf22cf417c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:32:10.786149   81659 system_pods.go:61] "kube-apiserver-newest-cni-886087" [1a98004c-5d8a-4c9c-9dab-bd5f73ff1bb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:32:10.786158   81659 system_pods.go:61] "kube-controller-manager-newest-cni-886087" [792b0691-de62-4426-8b2f-af02dbbd5295] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:32:10.786172   81659 system_pods.go:61] "kube-proxy-pdcz9" [e61fb8e1-e5a0-4e43-a2e3-98ee59eea944] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 19:32:10.786182   81659 system_pods.go:61] "kube-scheduler-newest-cni-886087" [e4716c76-34c7-4e05-a818-5900f52d2143] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:32:10.786193   81659 system_pods.go:61] "metrics-server-6867b74b74-p7hsm" [abe65954-245a-488c-8392-0ae4c215110f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:32:10.786204   81659 system_pods.go:61] "storage-provisioner" [d738844c-6b07-46de-858e-a9c746ec926e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 19:32:10.786214   81659 system_pods.go:74] duration metric: took 20.515749ms to wait for pod list to return data ...
	I1105 19:32:10.786227   81659 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:32:10.798955   81659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:32:10.799009   81659 node_conditions.go:123] node cpu capacity is 2
	I1105 19:32:10.799027   81659 node_conditions.go:105] duration metric: took 12.791563ms to run NodePressure ...
	I1105 19:32:10.799052   81659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:32:11.070886   81659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:32:11.081971   81659 ops.go:34] apiserver oom_adj: -16
	I1105 19:32:11.081996   81659 kubeadm.go:597] duration metric: took 8.719543223s to restartPrimaryControlPlane
	I1105 19:32:11.082008   81659 kubeadm.go:394] duration metric: took 8.768988651s to StartCluster
	I1105 19:32:11.082029   81659 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:32:11.082116   81659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:32:11.084055   81659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:32:11.084351   81659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:32:11.084467   81659 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:32:11.084598   81659 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-886087"
	I1105 19:32:11.084600   81659 config.go:182] Loaded profile config "newest-cni-886087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:32:11.084616   81659 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-886087"
	I1105 19:32:11.084619   81659 addons.go:69] Setting default-storageclass=true in profile "newest-cni-886087"
	I1105 19:32:11.084637   81659 addons.go:69] Setting metrics-server=true in profile "newest-cni-886087"
	I1105 19:32:11.084651   81659 addons.go:234] Setting addon metrics-server=true in "newest-cni-886087"
	I1105 19:32:11.084652   81659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-886087"
	W1105 19:32:11.084658   81659 addons.go:243] addon metrics-server should already be in state true
	I1105 19:32:11.084658   81659 addons.go:69] Setting dashboard=true in profile "newest-cni-886087"
	I1105 19:32:11.084673   81659 addons.go:234] Setting addon dashboard=true in "newest-cni-886087"
	W1105 19:32:11.084687   81659 addons.go:243] addon dashboard should already be in state true
	I1105 19:32:11.084714   81659 host.go:66] Checking if "newest-cni-886087" exists ...
	I1105 19:32:11.084688   81659 host.go:66] Checking if "newest-cni-886087" exists ...
	W1105 19:32:11.084628   81659 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:32:11.084934   81659 host.go:66] Checking if "newest-cni-886087" exists ...
	I1105 19:32:11.085110   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.085139   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.085141   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.085153   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.085179   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.085303   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.085330   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.085366   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.086028   81659 out.go:177] * Verifying Kubernetes components...
	I1105 19:32:11.087435   81659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:32:11.102509   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I1105 19:32:11.103009   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.103610   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.103659   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.104310   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.104813   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I1105 19:32:11.104826   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I1105 19:32:11.104884   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.104959   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.105164   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.105269   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.105584   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.105607   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.105761   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.105790   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.106178   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.106260   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.106812   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.106861   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.106952   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.107023   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.108275   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
	I1105 19:32:11.108821   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.109349   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.109360   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.109644   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.109825   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.113368   81659 addons.go:234] Setting addon default-storageclass=true in "newest-cni-886087"
	W1105 19:32:11.113390   81659 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:32:11.113416   81659 host.go:66] Checking if "newest-cni-886087" exists ...
	I1105 19:32:11.113778   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.113847   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.123012   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33877
	I1105 19:32:11.123142   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
	I1105 19:32:11.143731   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.144508   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.144717   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.144732   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.145292   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.145312   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.146219   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.146396   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.146456   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.147267   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.148938   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:32:11.149134   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:32:11.151282   81659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:32:11.151429   81659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:32:11.153158   81659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:32:11.153184   81659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:32:11.153211   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:32:11.153250   81659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:32:11.153274   81659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:32:11.153292   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:32:11.157729   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.157971   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.158046   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:32:11.158074   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.158259   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:32:11.158466   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:32:11.158596   81659 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:31:48 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:32:11.158842   81659 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:32:11.158908   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:32:11.158937   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:32:11.159111   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:32:11.159129   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:32:11.159249   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:32:11.159333   81659 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:32:11.164510   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46881
	I1105 19:32:11.164902   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.165495   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.165521   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.165860   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.166039   81659 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:32:11.166201   81659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I1105 19:32:11.166663   81659 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:32:11.167188   81659 main.go:141] libmachine: Using API Version  1
	I1105 19:32:11.167205   81659 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:32:11.167578   81659 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:32:11.167731   81659 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:32:11.168138   81659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:32:11.168171   81659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:32:11.170255   81659 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1105 19:32:11.175613   81659 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.713201680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00a588ee-a000-4dff-b417-3be7e26262b5 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.715026261Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4629d5fc-6cfb-4fe4-83d6-d7dabc91b2d9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.715458650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835132715435935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4629d5fc-6cfb-4fe4-83d6-d7dabc91b2d9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.715971232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36faf52b-44a5-4319-bd88-e184f4b4b970 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.716116090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36faf52b-44a5-4319-bd88-e184f4b4b970 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.716400271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0,PodSandboxId:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834154109601835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62,PodSandboxId:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730834153690549009,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d62880dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba,PodSandboxId:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834153587148495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
be11308-47aa-454a-97bd-5e6c5145a99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a,PodSandboxId:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730834152503786963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9,PodSandboxId:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730834141613931550,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860,PodSandboxId:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834141603531874,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24,PodSandboxId:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834141575994715,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330,PodSandboxId:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834141516195318,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a,PodSandboxId:30ffff2e57828a95015778a477406d68377bd862f4d682ab3bccf27942f2fec1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833852785620668,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36faf52b-44a5-4319-bd88-e184f4b4b970 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.731790613Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4537ce20-37c9-4f37-b679-ebb93f418016 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.732200903Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7dk86,Uid:170744f6-4b55-458d-a270-a8aa397c9cd3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730834153889287590,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T19:15:52.081223254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d3677a61ff77fcf1d5e75d389400b6d09c649937c1616a680381b58b4fea031,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-tvl8v,Uid:fb0b97cb-ee9c-40cf-9fc1-defcd11fad19,Namespace
:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730834153598031276,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-tvl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0b97cb-ee9c-40cf-9fc1-defcd11fad19,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T19:15:53.291261319Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:18a73546-576b-456e-9a91-a2a0d62880dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730834153547567747,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d
62880dd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-11-05T19:15:53.236434798Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-v5vt6,Uid:ebe11308-47aa-454a
-97bd-5e6c5145a99a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730834153311190276,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe11308-47aa-454a-97bd-5e6c5145a99a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T19:15:52.097922765Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&PodSandboxMetadata{Name:kube-proxy-nfxcj,Uid:2910ec66-6528-4d00-91c0-588a93c54fcf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730834152346874254,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-11-05T19:15:52.031604880Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-271881,Uid:a769b2c76c113f78e91812c836a9eeb3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730834141375932926,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a769b2c76c113f78e91812c836a9eeb3,kubernetes.io/config.seen: 2024-11-05T19:15:40.909017012Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-271881,Uid:3c58f0c75dfcc12e2af2accc238b8f92,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730834141374860169,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3c58f0c75dfcc12e2af2accc238b8f92,kubernetes.io/config.seen: 2024-11-05T19:15:40.909016169Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-271881,Uid:0df6e857291d595f0499d37b8c2d93a9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730834141373859881,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.58:8443,kubernetes.io/config.hash: 0df6e857291d595f0499d37b8c2d93a9,kubernetes.io/config.seen: 2024-11-05T19:15:40.909014985Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-271881,Uid:0de3c0318a9966d4c33dc7446e4e43c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730834141363626945,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39
.58:2379,kubernetes.io/config.hash: 0de3c0318a9966d4c33dc7446e4e43c9,kubernetes.io/config.seen: 2024-11-05T19:15:40.909010951Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4537ce20-37c9-4f37-b679-ebb93f418016 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.732993021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b49ce16a-ccd9-43a4-8b09-e6c6368d1fb4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.733148281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b49ce16a-ccd9-43a4-8b09-e6c6368d1fb4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.733410389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0,PodSandboxId:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834154109601835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62,PodSandboxId:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730834153690549009,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d62880dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba,PodSandboxId:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834153587148495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
be11308-47aa-454a-97bd-5e6c5145a99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a,PodSandboxId:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730834152503786963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9,PodSandboxId:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730834141613931550,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860,PodSandboxId:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834141603531874,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24,PodSandboxId:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834141575994715,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330,PodSandboxId:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834141516195318,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b49ce16a-ccd9-43a4-8b09-e6c6368d1fb4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.762260969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efe09cde-731e-43d5-a5ab-f94549e18fce name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.762391949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efe09cde-731e-43d5-a5ab-f94549e18fce name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.763505519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6aaa64a2-b18e-4e15-9f14-978e24010a6c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.764139740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835132764051229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6aaa64a2-b18e-4e15-9f14-978e24010a6c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.764714126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df0f2c5b-9f3f-4111-95b0-3ecf5f3f77ce name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.764767531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df0f2c5b-9f3f-4111-95b0-3ecf5f3f77ce name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.764952185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0,PodSandboxId:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834154109601835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62,PodSandboxId:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730834153690549009,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d62880dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba,PodSandboxId:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834153587148495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
be11308-47aa-454a-97bd-5e6c5145a99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a,PodSandboxId:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730834152503786963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9,PodSandboxId:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730834141613931550,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860,PodSandboxId:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834141603531874,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24,PodSandboxId:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834141575994715,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330,PodSandboxId:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834141516195318,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a,PodSandboxId:30ffff2e57828a95015778a477406d68377bd862f4d682ab3bccf27942f2fec1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833852785620668,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df0f2c5b-9f3f-4111-95b0-3ecf5f3f77ce name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.800616795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd63e41c-0e45-4c4d-a2ba-6142f44304cd name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.800686295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd63e41c-0e45-4c4d-a2ba-6142f44304cd name=/runtime.v1.RuntimeService/Version
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.803385727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13d4d14e-6956-4798-afe5-b6fdcf2bb06e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.803794644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835132803770708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13d4d14e-6956-4798-afe5-b6fdcf2bb06e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.804422802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bda029b1-e5be-45d2-a235-9e207712bc13 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.804496513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bda029b1-e5be-45d2-a235-9e207712bc13 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:32:12 embed-certs-271881 crio[717]: time="2024-11-05 19:32:12.804778534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0,PodSandboxId:d69437d54370af4580791d3d753e1371d85d9752b1d1000ab44d0c2232253123,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834154109601835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7dk86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170744f6-4b55-458d-a270-a8aa397c9cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62,PodSandboxId:5d590ebf53919034d806b1024bc11f193002bc1e833057cb3eb94f01f5e56977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730834153690549009,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 18a73546-576b-456e-9a91-a2a0d62880dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba,PodSandboxId:c4bf692d61b153ed23814c745b9be5d711943775633998f432e02bbbaac87237,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834153587148495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5vt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
be11308-47aa-454a-97bd-5e6c5145a99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a,PodSandboxId:23b8970401e7c1787f39c54f1afcfb2c4ffcc722a76d2dd726ce6aa6b52378ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1730834152503786963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfxcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2910ec66-6528-4d00-91c0-588a93c54fcf,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9,PodSandboxId:ef4acb1221f6a191b1c550dddd4e330cbfa974491e7b885aeaaba7ff5b893ddc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730834141613931550,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860,PodSandboxId:5016aebbc1d6e44815d2a5a4cb176c24a2f02b6c471dead00c77ea2eb99a8b92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834141603531874,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a769b2c76c113f78e91812c836a9eeb3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24,PodSandboxId:d4bf9cb5df4bb8a122b1efc82206c3f5c2966b6eacf1a8cd72fa553f292f4d77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834141575994715,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c58f0c75dfcc12e2af2accc238b8f92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330,PodSandboxId:71c1d456dadcf077db65209420df38de9f2365d82e2c9f06bf1c0956fd1ff647,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834141516195318,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0de3c0318a9966d4c33dc7446e4e43c9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a,PodSandboxId:30ffff2e57828a95015778a477406d68377bd862f4d682ab3bccf27942f2fec1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833852785620668,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-271881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df6e857291d595f0499d37b8c2d93a9,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bda029b1-e5be-45d2-a235-9e207712bc13 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8d76c3e72e03c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   d69437d54370a       coredns-7c65d6cfc9-7dk86
	da920711eafbb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   5d590ebf53919       storage-provisioner
	ac3f242769735       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   c4bf692d61b15       coredns-7c65d6cfc9-v5vt6
	ff003c2d0bf73       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   16 minutes ago      Running             kube-proxy                0                   23b8970401e7c       kube-proxy-nfxcj
	e7a67250a75d4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   16 minutes ago      Running             kube-apiserver            2                   ef4acb1221f6a       kube-apiserver-embed-certs-271881
	bb4479cf128df       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   16 minutes ago      Running             kube-scheduler            2                   5016aebbc1d6e       kube-scheduler-embed-certs-271881
	bfdf7a59551e2       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   16 minutes ago      Running             kube-controller-manager   2                   d4bf9cb5df4bb       kube-controller-manager-embed-certs-271881
	fa1edb4a8395e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   71c1d456dadcf       etcd-embed-certs-271881
	d2930f9215487       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 minutes ago      Exited              kube-apiserver            1                   30ffff2e57828       kube-apiserver-embed-certs-271881
	
	
	==> coredns [8d76c3e72e03cc5989d2c9fa303774dfba714e3c87a28c9d2c6a4f9a0ecd48b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ac3f24276973504773c4a744137a3f0e96c20be865550ce58fbafe47b3f46bba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-271881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-271881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=embed-certs-271881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T19_15_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 19:15:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-271881
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 19:32:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 19:31:12 +0000   Tue, 05 Nov 2024 19:15:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 19:31:12 +0000   Tue, 05 Nov 2024 19:15:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 19:31:12 +0000   Tue, 05 Nov 2024 19:15:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 19:31:12 +0000   Tue, 05 Nov 2024 19:15:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    embed-certs-271881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c03b11c8707426ab3b2acfa01fb5b0f
	  System UUID:                3c03b11c-8707-426a-b3b2-acfa01fb5b0f
	  Boot ID:                    d74b63b1-c0ce-4b62-8afd-2efa3b575194
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7dk86                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-v5vt6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-271881                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-271881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-271881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-nfxcj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-271881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-tvl8v               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-271881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-271881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-271881 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-271881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-271881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-271881 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-271881 event: Registered Node embed-certs-271881 in Controller
	
	
	==> dmesg <==
	[  +0.051362] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.844763] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968429] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.527862] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.014342] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.061423] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073311] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.206820] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.145906] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.300218] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +3.972273] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +2.369719] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[  +0.060546] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.537294] kauditd_printk_skb: 69 callbacks suppressed
	[Nov 5 19:11] kauditd_printk_skb: 85 callbacks suppressed
	[Nov 5 19:15] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.087594] systemd-fstab-generator[2587]: Ignoring "noauto" option for root device
	[  +4.629052] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.443249] systemd-fstab-generator[2907]: Ignoring "noauto" option for root device
	[  +5.892741] systemd-fstab-generator[3027]: Ignoring "noauto" option for root device
	[  +0.097767] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.786644] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [fa1edb4a8395e691c233c119d4a0a6b3c4fd900511435779b9233aab844f2330] <==
	{"level":"info","ts":"2024-11-05T19:15:42.797754Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:15:42.800441Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:15:42.803269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T19:15:42.813573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:15:42.816333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
	{"level":"info","ts":"2024-11-05T19:15:42.816438Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T19:15:42.816465Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-05T19:25:42.877333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":716}
	{"level":"info","ts":"2024-11-05T19:25:42.885568Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":716,"took":"7.524018ms","hash":3603352778,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2215936,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-11-05T19:25:42.885700Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3603352778,"revision":716,"compact-revision":-1}
	{"level":"info","ts":"2024-11-05T19:30:42.887982Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2024-11-05T19:30:42.891646Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":959,"took":"3.025416ms","hash":1469225358,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-11-05T19:30:42.891730Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1469225358,"revision":959,"compact-revision":716}
	{"level":"warn","ts":"2024-11-05T19:31:11.638133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.546175ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10757009328538023382 > lease_revoke:<id:154892fdc1127572>","response":"size:29"}
	{"level":"info","ts":"2024-11-05T19:31:11.899682Z","caller":"traceutil/trace.go:171","msg":"trace[1815633452] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"106.770626ms","start":"2024-11-05T19:31:11.792859Z","end":"2024-11-05T19:31:11.899630Z","steps":["trace[1815633452] 'process raft request'  (duration: 106.609147ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T19:31:12.743293Z","caller":"traceutil/trace.go:171","msg":"trace[934167593] linearizableReadLoop","detail":"{readStateIndex:1429; appliedIndex:1428; }","duration":"133.948775ms","start":"2024-11-05T19:31:12.609327Z","end":"2024-11-05T19:31:12.743276Z","steps":["trace[934167593] 'read index received'  (duration: 133.688212ms)","trace[934167593] 'applied index is now lower than readState.Index'  (duration: 260.067µs)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T19:31:12.743509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.14122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T19:31:12.744193Z","caller":"traceutil/trace.go:171","msg":"trace[939536689] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1228; }","duration":"134.855676ms","start":"2024-11-05T19:31:12.609322Z","end":"2024-11-05T19:31:12.744178Z","steps":["trace[939536689] 'agreement among raft nodes before linearized reading'  (duration: 134.121518ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T19:31:12.743580Z","caller":"traceutil/trace.go:171","msg":"trace[1243119846] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"269.042963ms","start":"2024-11-05T19:31:12.474520Z","end":"2024-11-05T19:31:12.743563Z","steps":["trace[1243119846] 'process raft request'  (duration: 268.614072ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T19:31:13.218163Z","caller":"traceutil/trace.go:171","msg":"trace[168756883] linearizableReadLoop","detail":"{readStateIndex:1430; appliedIndex:1429; }","duration":"143.461858ms","start":"2024-11-05T19:31:13.074688Z","end":"2024-11-05T19:31:13.218149Z","steps":["trace[168756883] 'read index received'  (duration: 143.231153ms)","trace[168756883] 'applied index is now lower than readState.Index'  (duration: 230.359µs)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T19:31:13.218239Z","caller":"traceutil/trace.go:171","msg":"trace[1918247718] transaction","detail":"{read_only:false; response_revision:1229; number_of_response:1; }","duration":"359.53703ms","start":"2024-11-05T19:31:12.858696Z","end":"2024-11-05T19:31:13.218233Z","steps":["trace[1918247718] 'process raft request'  (duration: 359.28752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T19:31:13.218671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T19:31:12.858680Z","time spent":"359.577836ms","remote":"127.0.0.1:52890","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5695,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/embed-certs-271881\" mod_revision:982 > success:<request_put:<key:\"/registry/minions/embed-certs-271881\" value_size:5651 >> failure:<request_range:<key:\"/registry/minions/embed-certs-271881\" > >"}
	{"level":"warn","ts":"2024-11-05T19:31:13.218695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.996724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-11-05T19:31:13.218899Z","caller":"traceutil/trace.go:171","msg":"trace[457642411] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:1229; }","duration":"144.207235ms","start":"2024-11-05T19:31:13.074683Z","end":"2024-11-05T19:31:13.218890Z","steps":["trace[457642411] 'agreement among raft nodes before linearized reading'  (duration: 143.976792ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T19:32:05.171345Z","caller":"traceutil/trace.go:171","msg":"trace[1206506122] transaction","detail":"{read_only:false; response_revision:1271; number_of_response:1; }","duration":"162.569465ms","start":"2024-11-05T19:32:05.008757Z","end":"2024-11-05T19:32:05.171326Z","steps":["trace[1206506122] 'process raft request'  (duration: 162.186385ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:32:13 up 21 min,  0 users,  load average: 0.18, 0.27, 0.20
	Linux embed-certs-271881 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d2930f9215487586727bd9eca76ad45143df71801516be459220e5ff8b75a38a] <==
	W1105 19:15:33.375683       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.392376       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.437009       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.442633       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.485861       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.516653       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.607332       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.746746       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:33.826314       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.140853       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.267607       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.391618       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.554531       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.584115       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.704758       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.734485       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.767702       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.820430       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.871270       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.927406       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.928813       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:37.971637       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:38.115410       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:38.169410       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:15:38.190606       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e7a67250a75d4806b1d1b4e66ade92f7d2c3c6307f2fcc99ba815108082b5ee9] <==
	I1105 19:28:45.383821       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:28:45.383896       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:30:44.381944       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:30:44.382189       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1105 19:30:45.383612       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:30:45.383679       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1105 19:30:45.383731       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:30:45.383805       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:30:45.384925       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:30:45.384973       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:31:45.385239       1 handler_proxy.go:99] no RequestInfo found in the context
	W1105 19:31:45.385253       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:31:45.385454       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1105 19:31:45.385508       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1105 19:31:45.386668       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:31:45.386707       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bfdf7a59551e2366152bb7bc90c88699bc0624f8d826126108e1188e29763b24] <==
	I1105 19:26:53.751891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="208.574µs"
	I1105 19:27:05.747540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="222.303µs"
	E1105 19:27:21.472627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:27:21.908942       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:27:51.479833       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:27:51.917519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:28:21.486700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:28:21.925574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:28:51.493133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:28:51.932864       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:29:21.500337       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:29:21.940991       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:29:51.507268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:29:51.949229       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:30:21.513392       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:30:21.956769       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:30:51.520495       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:30:51.964699       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:31:13.222701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-271881"
	E1105 19:31:21.528893       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:31:21.973106       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:31:51.536669       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:31:51.981437       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:31:53.748381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="274.94µs"
	I1105 19:32:07.748357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="53.308µs"
	
	
	==> kube-proxy [ff003c2d0bf73f428b8c01c0fb77c0c7f2401b7d7ae8ca5f3a64af3a4043614a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 19:15:52.935446       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 19:15:52.965631       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E1105 19:15:52.965712       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 19:15:53.052204       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 19:15:53.052253       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 19:15:53.052313       1 server_linux.go:169] "Using iptables Proxier"
	I1105 19:15:53.055383       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 19:15:53.055733       1 server.go:483] "Version info" version="v1.31.2"
	I1105 19:15:53.055763       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:15:53.057672       1 config.go:199] "Starting service config controller"
	I1105 19:15:53.057703       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 19:15:53.057736       1 config.go:105] "Starting endpoint slice config controller"
	I1105 19:15:53.057742       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 19:15:53.062658       1 config.go:328] "Starting node config controller"
	I1105 19:15:53.062690       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 19:15:53.159347       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 19:15:53.159399       1 shared_informer.go:320] Caches are synced for service config
	I1105 19:15:53.165523       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bb4479cf128df155f7ae1f3e65a351202c3118f33ec4ffe7efe815673adb0860] <==
	W1105 19:15:44.418132       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 19:15:44.418156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:44.419673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 19:15:44.419711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:44.419770       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 19:15:44.419793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:44.419837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 19:15:44.419857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:44.419925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 19:15:44.419948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.305783       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 19:15:45.305829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.362906       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 19:15:45.362961       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 19:15:45.385488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 19:15:45.385614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.400602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 19:15:45.400722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.467673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 19:15:45.467875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.534873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 19:15:45.535004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:15:45.546258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 19:15:45.546385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1105 19:15:48.006373       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 19:31:12 embed-certs-271881 kubelet[2914]: E1105 19:31:12.733013    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:31:16 embed-certs-271881 kubelet[2914]: E1105 19:31:16.962821    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835076962335137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:16 embed-certs-271881 kubelet[2914]: E1105 19:31:16.963354    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835076962335137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:26 embed-certs-271881 kubelet[2914]: E1105 19:31:26.965980    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835086965512596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:26 embed-certs-271881 kubelet[2914]: E1105 19:31:26.966083    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835086965512596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:27 embed-certs-271881 kubelet[2914]: E1105 19:31:27.732638    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:31:36 embed-certs-271881 kubelet[2914]: E1105 19:31:36.967688    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835096967342423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:36 embed-certs-271881 kubelet[2914]: E1105 19:31:36.968027    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835096967342423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:42 embed-certs-271881 kubelet[2914]: E1105 19:31:42.748031    2914 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 05 19:31:42 embed-certs-271881 kubelet[2914]: E1105 19:31:42.748140    2914 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 05 19:31:42 embed-certs-271881 kubelet[2914]: E1105 19:31:42.748370    2914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwqxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-tvl8v_kube-system(fb0b97cb-ee9c-40cf-9fc1-defcd11fad19): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Nov 05 19:31:42 embed-certs-271881 kubelet[2914]: E1105 19:31:42.749782    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:31:46 embed-certs-271881 kubelet[2914]: E1105 19:31:46.766794    2914 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 19:31:46 embed-certs-271881 kubelet[2914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 19:31:46 embed-certs-271881 kubelet[2914]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 19:31:46 embed-certs-271881 kubelet[2914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 19:31:46 embed-certs-271881 kubelet[2914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 19:31:46 embed-certs-271881 kubelet[2914]: E1105 19:31:46.970986    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835106970551952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:46 embed-certs-271881 kubelet[2914]: E1105 19:31:46.971024    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835106970551952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:53 embed-certs-271881 kubelet[2914]: E1105 19:31:53.732137    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	Nov 05 19:31:56 embed-certs-271881 kubelet[2914]: E1105 19:31:56.972912    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835116972613271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:56 embed-certs-271881 kubelet[2914]: E1105 19:31:56.973254    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835116972613271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:06 embed-certs-271881 kubelet[2914]: E1105 19:32:06.975133    2914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835126974674133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:06 embed-certs-271881 kubelet[2914]: E1105 19:32:06.975414    2914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835126974674133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:32:07 embed-certs-271881 kubelet[2914]: E1105 19:32:07.732872    2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tvl8v" podUID="fb0b97cb-ee9c-40cf-9fc1-defcd11fad19"
	
	
	==> storage-provisioner [da920711eafbb6aaa9ad8474804ee8531529f6737016a6333540c108e6a1be62] <==
	I1105 19:15:53.864950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 19:15:53.881914       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 19:15:53.882242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 19:15:53.931880       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 19:15:53.932507       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff0ef236-a5af-41c4-bd6f-5115de9de6bb", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-271881_b401f14b-02e0-4c5c-ab66-b1af16c5a036 became leader
	I1105 19:15:53.933296       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-271881_b401f14b-02e0-4c5c-ab66-b1af16c5a036!
	I1105 19:15:54.033736       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-271881_b401f14b-02e0-4c5c-ab66-b1af16c5a036!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271881 -n embed-certs-271881
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-271881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tvl8v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-271881 describe pod metrics-server-6867b74b74-tvl8v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-271881 describe pod metrics-server-6867b74b74-tvl8v: exit status 1 (63.601491ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tvl8v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-271881 describe pod metrics-server-6867b74b74-tvl8v: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (425.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (314.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-459223 -n no-preload-459223
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-11-05 19:31:18.658870126 +0000 UTC m=+6612.331594900
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-459223 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-459223 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.062µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-459223 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
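For reference, the image assertion at start_stop_delete_test.go:297 can be reproduced by hand against the same profile; a minimal manual equivalent (illustrative only, assuming the dashboard-metrics-scraper deployment exists in the kubernetes-dashboard namespace) is:

	kubectl --context no-preload-459223 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

In this run the preceding describe call already failed with a context deadline, which is why the "Addon deployment info" above is empty.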
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-459223 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-459223 logs -n 25: (1.243212886s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo find                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo crio                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-929548                                       | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-537175 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | disable-driver-mounts-537175                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:04 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-459223             | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-271881            | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:30 UTC | 05 Nov 24 19:30 UTC |
	| start   | -p newest-cni-886087 --memory=2200 --alsologtostderr   | newest-cni-886087            | jenkins | v1.34.0 | 05 Nov 24 19:30 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:30:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:30:39.236515   80934 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:30:39.236620   80934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:30:39.236628   80934 out.go:358] Setting ErrFile to fd 2...
	I1105 19:30:39.236633   80934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:30:39.236797   80934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:30:39.237375   80934 out.go:352] Setting JSON to false
	I1105 19:30:39.238313   80934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7981,"bootTime":1730827058,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:30:39.238423   80934 start.go:139] virtualization: kvm guest
	I1105 19:30:39.240669   80934 out.go:177] * [newest-cni-886087] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:30:39.242017   80934 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:30:39.242104   80934 notify.go:220] Checking for updates...
	I1105 19:30:39.244319   80934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:30:39.245568   80934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:30:39.246883   80934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:30:39.248194   80934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:30:39.249405   80934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:30:39.251189   80934 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:30:39.251288   80934 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:30:39.251415   80934 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:30:39.251498   80934 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:30:39.289151   80934 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 19:30:39.290316   80934 start.go:297] selected driver: kvm2
	I1105 19:30:39.290327   80934 start.go:901] validating driver "kvm2" against <nil>
	I1105 19:30:39.290338   80934 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:30:39.291093   80934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:30:39.291183   80934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:30:39.306096   80934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:30:39.306147   80934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1105 19:30:39.306196   80934 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1105 19:30:39.306385   80934 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1105 19:30:39.306411   80934 cni.go:84] Creating CNI manager for ""
	I1105 19:30:39.306452   80934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:30:39.306459   80934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 19:30:39.306500   80934 start.go:340] cluster config:
	{Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:30:39.306601   80934 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:30:39.308101   80934 out.go:177] * Starting "newest-cni-886087" primary control-plane node in "newest-cni-886087" cluster
	I1105 19:30:39.309268   80934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:30:39.309298   80934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 19:30:39.309307   80934 cache.go:56] Caching tarball of preloaded images
	I1105 19:30:39.309389   80934 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:30:39.309402   80934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 19:30:39.309489   80934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/config.json ...
	I1105 19:30:39.309506   80934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/config.json: {Name:mkb7798f043fc0f3afda4894063ba961df21ac5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:30:39.309656   80934 start.go:360] acquireMachinesLock for newest-cni-886087: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:30:39.309703   80934 start.go:364] duration metric: took 30.496µs to acquireMachinesLock for "newest-cni-886087"
	I1105 19:30:39.309726   80934 start.go:93] Provisioning new machine with config: &{Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:30:39.309787   80934 start.go:125] createHost starting for "" (driver="kvm2")
	I1105 19:30:39.311259   80934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 19:30:39.311384   80934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:30:39.311421   80934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:30:39.325861   80934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I1105 19:30:39.326285   80934 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:30:39.326871   80934 main.go:141] libmachine: Using API Version  1
	I1105 19:30:39.326893   80934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:30:39.327239   80934 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:30:39.327404   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:30:39.327567   80934 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:30:39.327686   80934 start.go:159] libmachine.API.Create for "newest-cni-886087" (driver="kvm2")
	I1105 19:30:39.327718   80934 client.go:168] LocalClient.Create starting
	I1105 19:30:39.327746   80934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem
	I1105 19:30:39.327775   80934 main.go:141] libmachine: Decoding PEM data...
	I1105 19:30:39.327792   80934 main.go:141] libmachine: Parsing certificate...
	I1105 19:30:39.327839   80934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem
	I1105 19:30:39.327865   80934 main.go:141] libmachine: Decoding PEM data...
	I1105 19:30:39.327879   80934 main.go:141] libmachine: Parsing certificate...
	I1105 19:30:39.327894   80934 main.go:141] libmachine: Running pre-create checks...
	I1105 19:30:39.327902   80934 main.go:141] libmachine: (newest-cni-886087) Calling .PreCreateCheck
	I1105 19:30:39.328235   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetConfigRaw
	I1105 19:30:39.328552   80934 main.go:141] libmachine: Creating machine...
	I1105 19:30:39.328564   80934 main.go:141] libmachine: (newest-cni-886087) Calling .Create
	I1105 19:30:39.328697   80934 main.go:141] libmachine: (newest-cni-886087) Creating KVM machine...
	I1105 19:30:39.329828   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found existing default KVM network
	I1105 19:30:39.331040   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:39.330857   80957 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:75:26:5b} reservation:<nil>}
	I1105 19:30:39.331846   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:39.331792   80957 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9f:eb:7d} reservation:<nil>}
	I1105 19:30:39.332912   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:39.332835   80957 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030c870}
	I1105 19:30:39.332934   80934 main.go:141] libmachine: (newest-cni-886087) DBG | created network xml: 
	I1105 19:30:39.332945   80934 main.go:141] libmachine: (newest-cni-886087) DBG | <network>
	I1105 19:30:39.332952   80934 main.go:141] libmachine: (newest-cni-886087) DBG |   <name>mk-newest-cni-886087</name>
	I1105 19:30:39.332966   80934 main.go:141] libmachine: (newest-cni-886087) DBG |   <dns enable='no'/>
	I1105 19:30:39.332979   80934 main.go:141] libmachine: (newest-cni-886087) DBG |   
	I1105 19:30:39.332994   80934 main.go:141] libmachine: (newest-cni-886087) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1105 19:30:39.333004   80934 main.go:141] libmachine: (newest-cni-886087) DBG |     <dhcp>
	I1105 19:30:39.333022   80934 main.go:141] libmachine: (newest-cni-886087) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1105 19:30:39.333031   80934 main.go:141] libmachine: (newest-cni-886087) DBG |     </dhcp>
	I1105 19:30:39.333040   80934 main.go:141] libmachine: (newest-cni-886087) DBG |   </ip>
	I1105 19:30:39.333048   80934 main.go:141] libmachine: (newest-cni-886087) DBG |   
	I1105 19:30:39.333067   80934 main.go:141] libmachine: (newest-cni-886087) DBG | </network>
	I1105 19:30:39.333079   80934 main.go:141] libmachine: (newest-cni-886087) DBG | 
	I1105 19:30:39.338675   80934 main.go:141] libmachine: (newest-cni-886087) DBG | trying to create private KVM network mk-newest-cni-886087 192.168.61.0/24...
	I1105 19:30:39.407204   80934 main.go:141] libmachine: (newest-cni-886087) DBG | private KVM network mk-newest-cni-886087 192.168.61.0/24 created
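
For reference, the network XML logged just above can be reproduced outside of minikube. The following is a minimal Go sketch, illustrative only: it defines and starts an equivalent libvirt network by shelling out to virsh, which is not the kvm2 driver's actual code path (the driver talks to libvirt directly).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Define and start a libvirt network equivalent to mk-newest-cni-886087
// (sketch; virsh net-define registers the network, net-start brings it up).
func main() {
	const name = "mk-newest-cni-886087"
	xml := `<network>
  <name>` + name + `</name>
  <dns enable='no'/>
  <ip address='192.168.61.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.61.2' end='192.168.61.253'/>
    </dhcp>
  </ip>
</network>`

	// Write the XML to a temp file so virsh can read it.
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(xml); err != nil {
		panic(err)
	}
	f.Close()

	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", name},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "virsh %s failed: %v\n", args[0], err)
			os.Exit(1)
		}
	}
}

The network can later be removed with virsh net-destroy and virsh net-undefine once the profile is deleted.
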
	I1105 19:30:39.407256   80934 main.go:141] libmachine: (newest-cni-886087) Setting up store path in /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087 ...
	I1105 19:30:39.407278   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:39.407203   80957 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:30:39.407295   80934 main.go:141] libmachine: (newest-cni-886087) Building disk image from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 19:30:39.407370   80934 main.go:141] libmachine: (newest-cni-886087) Downloading /home/jenkins/minikube-integration/19910-8296/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 19:30:39.656872   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:39.656761   80957 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa...
	I1105 19:30:39.945564   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:39.945455   80957 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/newest-cni-886087.rawdisk...
	I1105 19:30:39.945586   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Writing magic tar header
	I1105 19:30:39.945601   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Writing SSH key tar header
	I1105 19:30:39.945611   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:39.945567   80957 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087 ...
	I1105 19:30:39.945649   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087
	I1105 19:30:39.945686   80934 main.go:141] libmachine: (newest-cni-886087) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087 (perms=drwx------)
	I1105 19:30:39.945702   80934 main.go:141] libmachine: (newest-cni-886087) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube/machines (perms=drwxr-xr-x)
	I1105 19:30:39.945721   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube/machines
	I1105 19:30:39.945733   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:30:39.945743   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19910-8296
	I1105 19:30:39.945753   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1105 19:30:39.945779   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Checking permissions on dir: /home/jenkins
	I1105 19:30:39.945793   80934 main.go:141] libmachine: (newest-cni-886087) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296/.minikube (perms=drwxr-xr-x)
	I1105 19:30:39.945804   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Checking permissions on dir: /home
	I1105 19:30:39.945817   80934 main.go:141] libmachine: (newest-cni-886087) Setting executable bit set on /home/jenkins/minikube-integration/19910-8296 (perms=drwxrwxr-x)
	I1105 19:30:39.945832   80934 main.go:141] libmachine: (newest-cni-886087) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1105 19:30:39.945844   80934 main.go:141] libmachine: (newest-cni-886087) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1105 19:30:39.945852   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Skipping /home - not owner
	I1105 19:30:39.945861   80934 main.go:141] libmachine: (newest-cni-886087) Creating domain...
	I1105 19:30:39.946828   80934 main.go:141] libmachine: (newest-cni-886087) define libvirt domain using xml: 
	I1105 19:30:39.946845   80934 main.go:141] libmachine: (newest-cni-886087) <domain type='kvm'>
	I1105 19:30:39.946852   80934 main.go:141] libmachine: (newest-cni-886087)   <name>newest-cni-886087</name>
	I1105 19:30:39.946857   80934 main.go:141] libmachine: (newest-cni-886087)   <memory unit='MiB'>2200</memory>
	I1105 19:30:39.946862   80934 main.go:141] libmachine: (newest-cni-886087)   <vcpu>2</vcpu>
	I1105 19:30:39.946865   80934 main.go:141] libmachine: (newest-cni-886087)   <features>
	I1105 19:30:39.946870   80934 main.go:141] libmachine: (newest-cni-886087)     <acpi/>
	I1105 19:30:39.946892   80934 main.go:141] libmachine: (newest-cni-886087)     <apic/>
	I1105 19:30:39.946904   80934 main.go:141] libmachine: (newest-cni-886087)     <pae/>
	I1105 19:30:39.946914   80934 main.go:141] libmachine: (newest-cni-886087)     
	I1105 19:30:39.946922   80934 main.go:141] libmachine: (newest-cni-886087)   </features>
	I1105 19:30:39.946936   80934 main.go:141] libmachine: (newest-cni-886087)   <cpu mode='host-passthrough'>
	I1105 19:30:39.946941   80934 main.go:141] libmachine: (newest-cni-886087)   
	I1105 19:30:39.946945   80934 main.go:141] libmachine: (newest-cni-886087)   </cpu>
	I1105 19:30:39.946956   80934 main.go:141] libmachine: (newest-cni-886087)   <os>
	I1105 19:30:39.946963   80934 main.go:141] libmachine: (newest-cni-886087)     <type>hvm</type>
	I1105 19:30:39.946989   80934 main.go:141] libmachine: (newest-cni-886087)     <boot dev='cdrom'/>
	I1105 19:30:39.947005   80934 main.go:141] libmachine: (newest-cni-886087)     <boot dev='hd'/>
	I1105 19:30:39.947067   80934 main.go:141] libmachine: (newest-cni-886087)     <bootmenu enable='no'/>
	I1105 19:30:39.947094   80934 main.go:141] libmachine: (newest-cni-886087)   </os>
	I1105 19:30:39.947105   80934 main.go:141] libmachine: (newest-cni-886087)   <devices>
	I1105 19:30:39.947115   80934 main.go:141] libmachine: (newest-cni-886087)     <disk type='file' device='cdrom'>
	I1105 19:30:39.947128   80934 main.go:141] libmachine: (newest-cni-886087)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/boot2docker.iso'/>
	I1105 19:30:39.947141   80934 main.go:141] libmachine: (newest-cni-886087)       <target dev='hdc' bus='scsi'/>
	I1105 19:30:39.947147   80934 main.go:141] libmachine: (newest-cni-886087)       <readonly/>
	I1105 19:30:39.947153   80934 main.go:141] libmachine: (newest-cni-886087)     </disk>
	I1105 19:30:39.947166   80934 main.go:141] libmachine: (newest-cni-886087)     <disk type='file' device='disk'>
	I1105 19:30:39.947178   80934 main.go:141] libmachine: (newest-cni-886087)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1105 19:30:39.947199   80934 main.go:141] libmachine: (newest-cni-886087)       <source file='/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/newest-cni-886087.rawdisk'/>
	I1105 19:30:39.947217   80934 main.go:141] libmachine: (newest-cni-886087)       <target dev='hda' bus='virtio'/>
	I1105 19:30:39.947232   80934 main.go:141] libmachine: (newest-cni-886087)     </disk>
	I1105 19:30:39.947245   80934 main.go:141] libmachine: (newest-cni-886087)     <interface type='network'>
	I1105 19:30:39.947257   80934 main.go:141] libmachine: (newest-cni-886087)       <source network='mk-newest-cni-886087'/>
	I1105 19:30:39.947268   80934 main.go:141] libmachine: (newest-cni-886087)       <model type='virtio'/>
	I1105 19:30:39.947278   80934 main.go:141] libmachine: (newest-cni-886087)     </interface>
	I1105 19:30:39.947290   80934 main.go:141] libmachine: (newest-cni-886087)     <interface type='network'>
	I1105 19:30:39.947304   80934 main.go:141] libmachine: (newest-cni-886087)       <source network='default'/>
	I1105 19:30:39.947317   80934 main.go:141] libmachine: (newest-cni-886087)       <model type='virtio'/>
	I1105 19:30:39.947326   80934 main.go:141] libmachine: (newest-cni-886087)     </interface>
	I1105 19:30:39.947334   80934 main.go:141] libmachine: (newest-cni-886087)     <serial type='pty'>
	I1105 19:30:39.947344   80934 main.go:141] libmachine: (newest-cni-886087)       <target port='0'/>
	I1105 19:30:39.947352   80934 main.go:141] libmachine: (newest-cni-886087)     </serial>
	I1105 19:30:39.947361   80934 main.go:141] libmachine: (newest-cni-886087)     <console type='pty'>
	I1105 19:30:39.947370   80934 main.go:141] libmachine: (newest-cni-886087)       <target type='serial' port='0'/>
	I1105 19:30:39.947381   80934 main.go:141] libmachine: (newest-cni-886087)     </console>
	I1105 19:30:39.947389   80934 main.go:141] libmachine: (newest-cni-886087)     <rng model='virtio'>
	I1105 19:30:39.947398   80934 main.go:141] libmachine: (newest-cni-886087)       <backend model='random'>/dev/random</backend>
	I1105 19:30:39.947405   80934 main.go:141] libmachine: (newest-cni-886087)     </rng>
	I1105 19:30:39.947412   80934 main.go:141] libmachine: (newest-cni-886087)     
	I1105 19:30:39.947419   80934 main.go:141] libmachine: (newest-cni-886087)     
	I1105 19:30:39.947425   80934 main.go:141] libmachine: (newest-cni-886087)   </devices>
	I1105 19:30:39.947432   80934 main.go:141] libmachine: (newest-cni-886087) </domain>
	I1105 19:30:39.947438   80934 main.go:141] libmachine: (newest-cni-886087) 
	I1105 19:30:39.951600   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:26:1b:f4 in network default
	I1105 19:30:39.952165   80934 main.go:141] libmachine: (newest-cni-886087) Ensuring networks are active...
	I1105 19:30:39.952184   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:39.953132   80934 main.go:141] libmachine: (newest-cni-886087) Ensuring network default is active
	I1105 19:30:39.953399   80934 main.go:141] libmachine: (newest-cni-886087) Ensuring network mk-newest-cni-886087 is active
	I1105 19:30:39.953909   80934 main.go:141] libmachine: (newest-cni-886087) Getting domain xml...
	I1105 19:30:39.954583   80934 main.go:141] libmachine: (newest-cni-886087) Creating domain...
	I1105 19:30:41.246364   80934 main.go:141] libmachine: (newest-cni-886087) Waiting to get IP...
	I1105 19:30:41.247172   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:41.247623   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:41.247667   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:41.247594   80957 retry.go:31] will retry after 215.028706ms: waiting for machine to come up
	I1105 19:30:41.464176   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:41.464820   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:41.464845   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:41.464780   80957 retry.go:31] will retry after 349.878548ms: waiting for machine to come up
	I1105 19:30:41.816419   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:41.816959   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:41.816982   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:41.816912   80957 retry.go:31] will retry after 432.924557ms: waiting for machine to come up
	I1105 19:30:42.251344   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:42.251938   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:42.251965   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:42.251853   80957 retry.go:31] will retry after 425.555903ms: waiting for machine to come up
	I1105 19:30:42.679656   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:42.680092   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:42.680123   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:42.680054   80957 retry.go:31] will retry after 660.971122ms: waiting for machine to come up
	I1105 19:30:43.342949   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:43.343442   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:43.343467   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:43.343396   80957 retry.go:31] will retry after 597.490095ms: waiting for machine to come up
	I1105 19:30:43.942577   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:43.942941   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:43.942979   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:43.942884   80957 retry.go:31] will retry after 815.475691ms: waiting for machine to come up
	I1105 19:30:44.759567   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:44.760070   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:44.760098   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:44.760027   80957 retry.go:31] will retry after 1.336251815s: waiting for machine to come up
	I1105 19:30:46.098336   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:46.098825   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:46.098845   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:46.098774   80957 retry.go:31] will retry after 1.345607966s: waiting for machine to come up
	I1105 19:30:47.445842   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:47.446360   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:47.446388   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:47.446295   80957 retry.go:31] will retry after 2.24618582s: waiting for machine to come up
	I1105 19:30:49.694029   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:49.694595   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:49.694619   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:49.694544   80957 retry.go:31] will retry after 2.397381734s: waiting for machine to come up
	I1105 19:30:52.093034   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:52.093438   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:52.093454   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:52.093403   80957 retry.go:31] will retry after 2.550799982s: waiting for machine to come up
	I1105 19:30:54.645488   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:54.645988   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:54.646018   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:54.645952   80957 retry.go:31] will retry after 2.952027666s: waiting for machine to come up
	I1105 19:30:57.600126   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:30:57.600515   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find current IP address of domain newest-cni-886087 in network mk-newest-cni-886087
	I1105 19:30:57.600561   80934 main.go:141] libmachine: (newest-cni-886087) DBG | I1105 19:30:57.600497   80957 retry.go:31] will retry after 5.327421501s: waiting for machine to come up
	I1105 19:31:02.929261   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:02.929661   80934 main.go:141] libmachine: (newest-cni-886087) Found IP for machine: 192.168.61.217
	I1105 19:31:02.929680   80934 main.go:141] libmachine: (newest-cni-886087) Reserving static IP address...
	I1105 19:31:02.929688   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has current primary IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:02.930081   80934 main.go:141] libmachine: (newest-cni-886087) DBG | unable to find host DHCP lease matching {name: "newest-cni-886087", mac: "52:54:00:c0:46:5f", ip: "192.168.61.217"} in network mk-newest-cni-886087
	I1105 19:31:03.007534   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Getting to WaitForSSH function...
	I1105 19:31:03.007567   80934 main.go:141] libmachine: (newest-cni-886087) Reserved static IP address: 192.168.61.217
	I1105 19:31:03.007630   80934 main.go:141] libmachine: (newest-cni-886087) Waiting for SSH to be available...
	I1105 19:31:03.010234   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.010708   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:03.010735   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.010927   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Using SSH client type: external
	I1105 19:31:03.010951   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa (-rw-------)
	I1105 19:31:03.011024   80934 main.go:141] libmachine: (newest-cni-886087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:31:03.011048   80934 main.go:141] libmachine: (newest-cni-886087) DBG | About to run SSH command:
	I1105 19:31:03.011060   80934 main.go:141] libmachine: (newest-cni-886087) DBG | exit 0
	I1105 19:31:03.135089   80934 main.go:141] libmachine: (newest-cni-886087) DBG | SSH cmd err, output: <nil>: 
	I1105 19:31:03.135325   80934 main.go:141] libmachine: (newest-cni-886087) KVM machine creation complete!
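
The "will retry after ..." lines above come from a retry helper that polls until the new domain has a DHCP lease and answers SSH. Below is a generic poll-with-growing-delay sketch of the same idea; the delays and the check function are stand-ins for illustration, not minikube's real values.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check() until it reports success, the deadline passes,
// or it returns an error, growing the delay between attempts.
func waitFor(check func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the delay between attempts
		}
	}
	return errors.New("timed out waiting for machine")
}

func main() {
	start := time.Now()
	err := waitFor(func() (bool, error) {
		// Stand-in for "look up the domain's DHCP lease / try to open SSH".
		return time.Since(start) > 2*time.Second, nil
	}, 30*time.Second)
	fmt.Println("done, err =", err)
}
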
	I1105 19:31:03.135648   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetConfigRaw
	I1105 19:31:03.136173   80934 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:03.136352   80934 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:03.136501   80934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 19:31:03.136514   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetState
	I1105 19:31:03.137901   80934 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 19:31:03.137916   80934 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 19:31:03.137928   80934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 19:31:03.137938   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:03.140331   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.140692   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:03.140718   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.140866   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:03.141041   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.141217   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.141349   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:03.141562   80934 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:03.141738   80934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:03.141748   80934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 19:31:03.246494   80934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:31:03.246516   80934 main.go:141] libmachine: Detecting the provisioner...
	I1105 19:31:03.246523   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:03.249172   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.249619   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:03.249650   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.249785   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:03.249981   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.250225   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.250404   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:03.250635   80934 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:03.250882   80934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:03.250894   80934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 19:31:03.355321   80934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 19:31:03.355381   80934 main.go:141] libmachine: found compatible host: buildroot
	I1105 19:31:03.355388   80934 main.go:141] libmachine: Provisioning with buildroot...
	I1105 19:31:03.355396   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:03.355644   80934 buildroot.go:166] provisioning hostname "newest-cni-886087"
	I1105 19:31:03.355675   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:03.355906   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:03.358630   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.359010   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:03.359053   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.359174   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:03.359349   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.359504   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.359590   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:03.359706   80934 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:03.359920   80934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:03.359945   80934 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-886087 && echo "newest-cni-886087" | sudo tee /etc/hostname
	I1105 19:31:03.481322   80934 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-886087
	
	I1105 19:31:03.481354   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:03.484070   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.484360   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:03.484389   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.484540   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:03.484706   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.484868   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.485021   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:03.485202   80934 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:03.485411   80934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:03.485428   80934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-886087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-886087/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-886087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:31:03.600308   80934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:31:03.600340   80934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:31:03.600363   80934 buildroot.go:174] setting up certificates
	I1105 19:31:03.600376   80934 provision.go:84] configureAuth start
	I1105 19:31:03.600393   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetMachineName
	I1105 19:31:03.600690   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:03.603486   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.603794   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:03.603824   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.603933   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:03.606007   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.606357   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:03.606393   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.606522   80934 provision.go:143] copyHostCerts
	I1105 19:31:03.606598   80934 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:31:03.606621   80934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:31:03.606698   80934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:31:03.606779   80934 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:31:03.606788   80934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:31:03.606813   80934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:31:03.606862   80934 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:31:03.606868   80934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:31:03.606889   80934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:31:03.606936   80934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.newest-cni-886087 san=[127.0.0.1 192.168.61.217 localhost minikube newest-cni-886087]
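
The server certificate above is generated against the minikube CA for the listed SANs. As a rough, self-signed illustration of producing a server certificate with the same SAN set using only the Go standard library (not minikube's provisioning code, which signs with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Self-signed server certificate covering the SANs from the log line above
// (sketch for illustration; a real setup would sign with the CA key instead).
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-886087"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-886087"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.217")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit the certificate in PEM form, like the server.pem copied to the node.
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
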
	I1105 19:31:03.899288   80934 provision.go:177] copyRemoteCerts
	I1105 19:31:03.899360   80934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:31:03.899389   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:03.902097   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.902405   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:03.902434   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:03.902603   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:03.902808   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:03.902985   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:03.903115   80934 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:03.984749   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:31:04.009033   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:31:04.034628   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 19:31:04.060257   80934 provision.go:87] duration metric: took 459.864341ms to configureAuth
	I1105 19:31:04.060283   80934 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:31:04.060484   80934 config.go:182] Loaded profile config "newest-cni-886087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:31:04.060563   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:04.063196   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.063543   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:04.063568   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.063718   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:04.063898   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:04.064055   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:04.064208   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:04.064355   80934 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:04.064551   80934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:04.064573   80934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:31:04.282821   80934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:31:04.282843   80934 main.go:141] libmachine: Checking connection to Docker...
	I1105 19:31:04.282851   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetURL
	I1105 19:31:04.284215   80934 main.go:141] libmachine: (newest-cni-886087) DBG | Using libvirt version 6000000
	I1105 19:31:04.286606   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.287102   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:04.287128   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.287325   80934 main.go:141] libmachine: Docker is up and running!
	I1105 19:31:04.287338   80934 main.go:141] libmachine: Reticulating splines...
	I1105 19:31:04.287344   80934 client.go:171] duration metric: took 24.959616901s to LocalClient.Create
	I1105 19:31:04.287363   80934 start.go:167] duration metric: took 24.959677443s to libmachine.API.Create "newest-cni-886087"
	I1105 19:31:04.287394   80934 start.go:293] postStartSetup for "newest-cni-886087" (driver="kvm2")
	I1105 19:31:04.287413   80934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:31:04.287436   80934 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:04.287714   80934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:31:04.287744   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:04.289672   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.289952   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:04.289977   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.290124   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:04.290283   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:04.290427   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:04.290557   80934 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:04.385852   80934 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:31:04.389953   80934 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:31:04.389979   80934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:31:04.390058   80934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:31:04.390148   80934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:31:04.390268   80934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:31:04.399999   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:31:04.426256   80934 start.go:296] duration metric: took 138.842165ms for postStartSetup
	I1105 19:31:04.426354   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetConfigRaw
	I1105 19:31:04.426941   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:04.429754   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.430177   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:04.430210   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.430466   80934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/config.json ...
	I1105 19:31:04.430653   80934 start.go:128] duration metric: took 25.120856422s to createHost
	I1105 19:31:04.430688   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:04.433003   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.433337   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:04.433376   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.433485   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:04.433658   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:04.433805   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:04.433960   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:04.434125   80934 main.go:141] libmachine: Using SSH client type: native
	I1105 19:31:04.434295   80934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.217 22 <nil> <nil>}
	I1105 19:31:04.434308   80934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:31:04.543725   80934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730835064.522617036
	
	I1105 19:31:04.543746   80934 fix.go:216] guest clock: 1730835064.522617036
	I1105 19:31:04.543753   80934 fix.go:229] Guest: 2024-11-05 19:31:04.522617036 +0000 UTC Remote: 2024-11-05 19:31:04.430665976 +0000 UTC m=+25.230866077 (delta=91.95106ms)
	I1105 19:31:04.543770   80934 fix.go:200] guest clock delta is within tolerance: 91.95106ms
	I1105 19:31:04.543776   80934 start.go:83] releasing machines lock for "newest-cni-886087", held for 25.234061918s
	I1105 19:31:04.543795   80934 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:04.544057   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:04.546847   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.547194   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:04.547221   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.547399   80934 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:04.547871   80934 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:04.548062   80934 main.go:141] libmachine: (newest-cni-886087) Calling .DriverName
	I1105 19:31:04.548156   80934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:31:04.548214   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:04.548279   80934 ssh_runner.go:195] Run: cat /version.json
	I1105 19:31:04.548429   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHHostname
	I1105 19:31:04.550945   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.551262   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:04.551299   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.551319   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.551612   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:04.551759   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:04.551794   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:04.551818   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:04.551891   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:04.551967   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHPort
	I1105 19:31:04.552016   80934 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:04.552084   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHKeyPath
	I1105 19:31:04.552195   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetSSHUsername
	I1105 19:31:04.552296   80934 sshutil.go:53] new ssh client: &{IP:192.168.61.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/newest-cni-886087/id_rsa Username:docker}
	I1105 19:31:04.627678   80934 ssh_runner.go:195] Run: systemctl --version
	I1105 19:31:04.661368   80934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:31:04.823666   80934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:31:04.829317   80934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:31:04.829387   80934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:31:04.844509   80934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:31:04.844532   80934 start.go:495] detecting cgroup driver to use...
	I1105 19:31:04.844587   80934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:31:04.859250   80934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:31:04.874903   80934 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:31:04.874962   80934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:31:04.889136   80934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:31:04.902112   80934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:31:05.017595   80934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:31:05.181182   80934 docker.go:233] disabling docker service ...
	I1105 19:31:05.181247   80934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:31:05.195459   80934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:31:05.208286   80934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:31:05.325823   80934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:31:05.457534   80934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:31:05.471489   80934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:31:05.489442   80934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:31:05.489511   80934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:05.500059   80934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:31:05.500135   80934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:05.510654   80934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:05.520188   80934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:05.529780   80934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:31:05.540085   80934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:05.549250   80934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:05.565075   80934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:31:05.574944   80934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:31:05.584426   80934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:31:05.584495   80934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:31:05.596794   80934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:31:05.606447   80934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:31:05.731672   80934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:31:05.827042   80934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:31:05.827114   80934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:31:05.831425   80934 start.go:563] Will wait 60s for crictl version
	I1105 19:31:05.831511   80934 ssh_runner.go:195] Run: which crictl
	I1105 19:31:05.835036   80934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:31:05.877673   80934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:31:05.877744   80934 ssh_runner.go:195] Run: crio --version
	I1105 19:31:05.905751   80934 ssh_runner.go:195] Run: crio --version
	I1105 19:31:05.934770   80934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:31:05.935992   80934 main.go:141] libmachine: (newest-cni-886087) Calling .GetIP
	I1105 19:31:05.938801   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:05.939297   80934 main.go:141] libmachine: (newest-cni-886087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:46:5f", ip: ""} in network mk-newest-cni-886087: {Iface:virbr3 ExpiryTime:2024-11-05 20:30:53 +0000 UTC Type:0 Mac:52:54:00:c0:46:5f Iaid: IPaddr:192.168.61.217 Prefix:24 Hostname:newest-cni-886087 Clientid:01:52:54:00:c0:46:5f}
	I1105 19:31:05.939348   80934 main.go:141] libmachine: (newest-cni-886087) DBG | domain newest-cni-886087 has defined IP address 192.168.61.217 and MAC address 52:54:00:c0:46:5f in network mk-newest-cni-886087
	I1105 19:31:05.939577   80934 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:31:05.943509   80934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:31:05.957575   80934 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1105 19:31:05.958831   80934 kubeadm.go:883] updating cluster {Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:31:05.958992   80934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:31:05.959063   80934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:31:05.990964   80934 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:31:05.991038   80934 ssh_runner.go:195] Run: which lz4
	I1105 19:31:05.994547   80934 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:31:05.998267   80934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:31:05.998298   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:31:07.235591   80934 crio.go:462] duration metric: took 1.241065979s to copy over tarball
	I1105 19:31:07.235687   80934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:31:09.325328   80934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.089608624s)
	I1105 19:31:09.325362   80934 crio.go:469] duration metric: took 2.089732175s to extract the tarball
	I1105 19:31:09.325373   80934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:31:09.362065   80934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:31:09.404441   80934 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:31:09.404469   80934 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:31:09.404480   80934 kubeadm.go:934] updating node { 192.168.61.217 8443 v1.31.2 crio true true} ...
	I1105 19:31:09.404590   80934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-886087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:31:09.404681   80934 ssh_runner.go:195] Run: crio config
	I1105 19:31:09.450918   80934 cni.go:84] Creating CNI manager for ""
	I1105 19:31:09.450942   80934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:31:09.450950   80934 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1105 19:31:09.450983   80934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.217 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-886087 NodeName:newest-cni-886087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.61.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:31:09.451112   80934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-886087"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:31:09.451169   80934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:31:09.461247   80934 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:31:09.461312   80934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:31:09.470199   80934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1105 19:31:09.485954   80934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:31:09.503618   80934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1105 19:31:09.519536   80934 ssh_runner.go:195] Run: grep 192.168.61.217	control-plane.minikube.internal$ /etc/hosts
	I1105 19:31:09.523455   80934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:31:09.535089   80934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:31:09.674521   80934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:31:09.692988   80934 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087 for IP: 192.168.61.217
	I1105 19:31:09.693013   80934 certs.go:194] generating shared ca certs ...
	I1105 19:31:09.693034   80934 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:31:09.693210   80934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:31:09.693250   80934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:31:09.693260   80934 certs.go:256] generating profile certs ...
	I1105 19:31:09.693308   80934 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/client.key
	I1105 19:31:09.693320   80934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/client.crt with IP's: []
	I1105 19:31:09.754445   80934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/client.crt ...
	I1105 19:31:09.754471   80934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/client.crt: {Name:mk07325d216a62cab116be962d5879f962552199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:31:09.754650   80934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/client.key ...
	I1105 19:31:09.754660   80934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/client.key: {Name:mk6905722052a9dbbd6166b9f22ba7ea71365166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:31:09.754735   80934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key.141acc84
	I1105 19:31:09.754750   80934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.crt.141acc84 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.217]
	I1105 19:31:09.904447   80934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.crt.141acc84 ...
	I1105 19:31:09.904476   80934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.crt.141acc84: {Name:mke34b4c8f5f8c2ce091c5c94f11398a10975c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:31:09.904631   80934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key.141acc84 ...
	I1105 19:31:09.904651   80934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key.141acc84: {Name:mkb52a5c3cc5631ac0cd9d31da8524f2d865b2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:31:09.904720   80934 certs.go:381] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.crt.141acc84 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.crt
	I1105 19:31:09.904810   80934 certs.go:385] copying /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key.141acc84 -> /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key
	I1105 19:31:09.904867   80934 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.key
	I1105 19:31:09.904885   80934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.crt with IP's: []
	I1105 19:31:09.994382   80934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.crt ...
	I1105 19:31:09.994411   80934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.crt: {Name:mkbb9991cc3fd583539c908336a36de8a80fca04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:31:09.994562   80934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.key ...
	I1105 19:31:09.994574   80934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.key: {Name:mk6ae3e1ccb5880dcac0a20f563f69e7946ea514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:31:09.994741   80934 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:31:09.994777   80934 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:31:09.994788   80934 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:31:09.994812   80934 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:31:09.994834   80934 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:31:09.994855   80934 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:31:09.994893   80934 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:31:09.995433   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:31:10.020288   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:31:10.043787   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:31:10.067316   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:31:10.091365   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:31:10.113912   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:31:10.136738   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:31:10.160652   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/newest-cni-886087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:31:10.183087   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:31:10.205305   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:31:10.227107   80934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:31:10.250473   80934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:31:10.266995   80934 ssh_runner.go:195] Run: openssl version
	I1105 19:31:10.272705   80934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:31:10.283489   80934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:31:10.288008   80934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:31:10.288073   80934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:31:10.293550   80934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:31:10.303595   80934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:31:10.314109   80934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:31:10.318568   80934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:31:10.318628   80934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:31:10.324080   80934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:31:10.334491   80934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:31:10.345121   80934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:31:10.349134   80934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:31:10.349188   80934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:31:10.354607   80934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:31:10.365008   80934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:31:10.368912   80934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 19:31:10.368985   80934 kubeadm.go:392] StartCluster: {Name:newest-cni-886087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-886087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:31:10.369080   80934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:31:10.369121   80934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:31:10.410731   80934 cri.go:89] found id: ""
	I1105 19:31:10.410814   80934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:31:10.420728   80934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:31:10.429802   80934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:31:10.441442   80934 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:31:10.441462   80934 kubeadm.go:157] found existing configuration files:
	
	I1105 19:31:10.441511   80934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:31:10.451524   80934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:31:10.451593   80934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:31:10.462108   80934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:31:10.472975   80934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:31:10.473038   80934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:31:10.484454   80934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:31:10.495005   80934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:31:10.495085   80934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:31:10.505382   80934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:31:10.515589   80934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:31:10.515656   80934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:31:10.525327   80934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:31:10.743434   80934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.286305379Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835079286272208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2185988-2d63-4ceb-a35c-f2dfc9df8e23 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.287097747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fbecedc-a30e-4a8b-b4f5-09a1023a1bc3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.287248408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fbecedc-a30e-4a8b-b4f5-09a1023a1bc3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.287685659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab,PodSandboxId:53d95ad8175d2c3e2a0547d1e54ab7d716d92f9f6bb34d3b393fbf1e44fc3dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218398362023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xx9wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17910730-8b50-4223-8af5-82b701aa2f96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af,PodSandboxId:9c68653e627573ac6486fdd226956920611b4faf77bc00b25cbb0e4c704fe203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218148563926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gl9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bee65a6-f684-4675-b356-62602fa628c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d,PodSandboxId:c565fa80a6aaf317ad0a1e4a15b4dd21f57b5d04f455a10bcfc366451de4d05d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1730834217463475970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4743de2f-37ed-4b92-ac4e-4bcbff5897b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a,PodSandboxId:2e59b18e4713ed733f5c8b56a24b6afdd6659fd83fd02f8790941a1a64001db9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730834217239521206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txq44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4a537b-e4cc-4254-9a22-679795366362,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5,PodSandboxId:4ee8c4b268f91471c4186d36d454da0207df96223ef74f008b0f172b6965f7da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834206622064588,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5b9e61ccfc5846d0b9bbd773dc071,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795,PodSandboxId:2722b4838dace6612ede6aacfd690bfa3ad6ea7383a0a4ae5436bb7f0b82ce1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173083420657764
6503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec,PodSandboxId:769f2d218ba80fd7d1999b1f5008c9e15b825a554d76b09f545800c6fbfc4fdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834206543391485,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcfc5f9c14a629c1363a718710ab4809,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda,PodSandboxId:81812c8fa67882adaf70636f9e0601298b63deb80ec077a0c3d97f57bfd56719,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834206540810236,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e114f84917815ecea095e683e62042c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53,PodSandboxId:dcd5be362a6c5770f7d6fe56e370839847e1dce1b092bbbd3c55b5162b656551,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833921559164343,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fbecedc-a30e-4a8b-b4f5-09a1023a1bc3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.323498016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da15d1f8-0dad-409b-a280-b003d253f971 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.323604600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da15d1f8-0dad-409b-a280-b003d253f971 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.324870572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0f5d813-1d15-4768-8eb6-15d4d80b5f2b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.325212535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835079325192128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0f5d813-1d15-4768-8eb6-15d4d80b5f2b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.325664152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc010cfa-331a-4a5b-8308-17dd404e7920 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.325787141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc010cfa-331a-4a5b-8308-17dd404e7920 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.326000883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab,PodSandboxId:53d95ad8175d2c3e2a0547d1e54ab7d716d92f9f6bb34d3b393fbf1e44fc3dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218398362023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xx9wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17910730-8b50-4223-8af5-82b701aa2f96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af,PodSandboxId:9c68653e627573ac6486fdd226956920611b4faf77bc00b25cbb0e4c704fe203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218148563926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gl9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bee65a6-f684-4675-b356-62602fa628c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d,PodSandboxId:c565fa80a6aaf317ad0a1e4a15b4dd21f57b5d04f455a10bcfc366451de4d05d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1730834217463475970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4743de2f-37ed-4b92-ac4e-4bcbff5897b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a,PodSandboxId:2e59b18e4713ed733f5c8b56a24b6afdd6659fd83fd02f8790941a1a64001db9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730834217239521206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txq44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4a537b-e4cc-4254-9a22-679795366362,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5,PodSandboxId:4ee8c4b268f91471c4186d36d454da0207df96223ef74f008b0f172b6965f7da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834206622064588,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5b9e61ccfc5846d0b9bbd773dc071,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795,PodSandboxId:2722b4838dace6612ede6aacfd690bfa3ad6ea7383a0a4ae5436bb7f0b82ce1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173083420657764
6503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec,PodSandboxId:769f2d218ba80fd7d1999b1f5008c9e15b825a554d76b09f545800c6fbfc4fdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834206543391485,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcfc5f9c14a629c1363a718710ab4809,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda,PodSandboxId:81812c8fa67882adaf70636f9e0601298b63deb80ec077a0c3d97f57bfd56719,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834206540810236,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e114f84917815ecea095e683e62042c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53,PodSandboxId:dcd5be362a6c5770f7d6fe56e370839847e1dce1b092bbbd3c55b5162b656551,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833921559164343,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc010cfa-331a-4a5b-8308-17dd404e7920 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.362380426Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8f80b66-0431-4f54-8746-50d9f6663523 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.362452619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8f80b66-0431-4f54-8746-50d9f6663523 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.364000014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7d25d3b-44e4-4201-87d3-0ef2599b1ff1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.367403616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835079367373930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7d25d3b-44e4-4201-87d3-0ef2599b1ff1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.368396232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05d4b56a-5b00-4d39-ab90-847337d77bdc name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.368460948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05d4b56a-5b00-4d39-ab90-847337d77bdc name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.368640236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab,PodSandboxId:53d95ad8175d2c3e2a0547d1e54ab7d716d92f9f6bb34d3b393fbf1e44fc3dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218398362023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xx9wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17910730-8b50-4223-8af5-82b701aa2f96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af,PodSandboxId:9c68653e627573ac6486fdd226956920611b4faf77bc00b25cbb0e4c704fe203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218148563926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gl9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bee65a6-f684-4675-b356-62602fa628c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d,PodSandboxId:c565fa80a6aaf317ad0a1e4a15b4dd21f57b5d04f455a10bcfc366451de4d05d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1730834217463475970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4743de2f-37ed-4b92-ac4e-4bcbff5897b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a,PodSandboxId:2e59b18e4713ed733f5c8b56a24b6afdd6659fd83fd02f8790941a1a64001db9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730834217239521206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txq44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4a537b-e4cc-4254-9a22-679795366362,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5,PodSandboxId:4ee8c4b268f91471c4186d36d454da0207df96223ef74f008b0f172b6965f7da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834206622064588,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5b9e61ccfc5846d0b9bbd773dc071,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795,PodSandboxId:2722b4838dace6612ede6aacfd690bfa3ad6ea7383a0a4ae5436bb7f0b82ce1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173083420657764
6503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec,PodSandboxId:769f2d218ba80fd7d1999b1f5008c9e15b825a554d76b09f545800c6fbfc4fdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834206543391485,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcfc5f9c14a629c1363a718710ab4809,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda,PodSandboxId:81812c8fa67882adaf70636f9e0601298b63deb80ec077a0c3d97f57bfd56719,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834206540810236,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e114f84917815ecea095e683e62042c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53,PodSandboxId:dcd5be362a6c5770f7d6fe56e370839847e1dce1b092bbbd3c55b5162b656551,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833921559164343,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05d4b56a-5b00-4d39-ab90-847337d77bdc name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.399957717Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bee263a4-67c3-4904-a74e-e6eb773b0335 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.400029658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bee263a4-67c3-4904-a74e-e6eb773b0335 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.401139231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d0ff85b-2789-4b10-bb9f-fa5c78147ac3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.401847005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835079401819223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d0ff85b-2789-4b10-bb9f-fa5c78147ac3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.402254301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a34e091-2532-408c-a1a5-199dc95d4235 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.402304360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a34e091-2532-408c-a1a5-199dc95d4235 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:31:19 no-preload-459223 crio[709]: time="2024-11-05 19:31:19.402515100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab,PodSandboxId:53d95ad8175d2c3e2a0547d1e54ab7d716d92f9f6bb34d3b393fbf1e44fc3dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218398362023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xx9wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17910730-8b50-4223-8af5-82b701aa2f96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af,PodSandboxId:9c68653e627573ac6486fdd226956920611b4faf77bc00b25cbb0e4c704fe203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730834218148563926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gl9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9bee65a6-f684-4675-b356-62602fa628c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d,PodSandboxId:c565fa80a6aaf317ad0a1e4a15b4dd21f57b5d04f455a10bcfc366451de4d05d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1730834217463475970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4743de2f-37ed-4b92-ac4e-4bcbff5897b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a,PodSandboxId:2e59b18e4713ed733f5c8b56a24b6afdd6659fd83fd02f8790941a1a64001db9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1730834217239521206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txq44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4a537b-e4cc-4254-9a22-679795366362,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5,PodSandboxId:4ee8c4b268f91471c4186d36d454da0207df96223ef74f008b0f172b6965f7da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730834206622064588,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5b9e61ccfc5846d0b9bbd773dc071,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795,PodSandboxId:2722b4838dace6612ede6aacfd690bfa3ad6ea7383a0a4ae5436bb7f0b82ce1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173083420657764
6503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec,PodSandboxId:769f2d218ba80fd7d1999b1f5008c9e15b825a554d76b09f545800c6fbfc4fdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730834206543391485,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcfc5f9c14a629c1363a718710ab4809,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda,PodSandboxId:81812c8fa67882adaf70636f9e0601298b63deb80ec077a0c3d97f57bfd56719,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730834206540810236,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e114f84917815ecea095e683e62042c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53,PodSandboxId:dcd5be362a6c5770f7d6fe56e370839847e1dce1b092bbbd3c55b5162b656551,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730833921559164343,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-459223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94206c398038902addfa6a59f19fc698,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a34e091-2532-408c-a1a5-199dc95d4235 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8299ec71cd6b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   53d95ad8175d2       coredns-7c65d6cfc9-xx9wl
	06944d69e896b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   9c68653e62757       coredns-7c65d6cfc9-gl9th
	a9107fec3c6ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   c565fa80a6aaf       storage-provisioner
	fef03f0dffe73       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   14 minutes ago      Running             kube-proxy                0                   2e59b18e4713e       kube-proxy-txq44
	e0e6f9312034b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   14 minutes ago      Running             kube-controller-manager   2                   4ee8c4b268f91       kube-controller-manager-no-preload-459223
	e508df75b1e52       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Running             kube-apiserver            2                   2722b4838dace       kube-apiserver-no-preload-459223
	23716e18606f9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   14 minutes ago      Running             kube-scheduler            2                   769f2d218ba80       kube-scheduler-no-preload-459223
	fe5cad52df568       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   81812c8fa6788       etcd-no-preload-459223
	19f1612ca8def       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   19 minutes ago      Exited              kube-apiserver            1                   dcd5be362a6c5       kube-apiserver-no-preload-459223
	
	
	==> coredns [06944d69e896b5ef27f9a81f945959fc36fd89a38c089b3c9017755e637d10af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a8299ec71cd6b37e72fdd3f8627cf867a0908901537ba55d84171942fed764ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-459223
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-459223
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=no-preload-459223
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T19_16_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 19:16:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-459223
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 19:31:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 19:27:12 +0000   Tue, 05 Nov 2024 19:16:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 19:27:12 +0000   Tue, 05 Nov 2024 19:16:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 19:27:12 +0000   Tue, 05 Nov 2024 19:16:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 19:27:12 +0000   Tue, 05 Nov 2024 19:16:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.101
	  Hostname:    no-preload-459223
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1674e32c04b493ead7da91f37718f8a
	  System UUID:                b1674e32-c04b-493e-ad7d-a91f37718f8a
	  Boot ID:                    a9004ea1-1fbf-4031-a350-a672fb92ac60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gl9th                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-xx9wl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-459223                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-459223             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-459223    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-txq44                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-459223             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-qbgx4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-459223 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-459223 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-459223 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-459223 event: Registered Node no-preload-459223 in Controller
	
	
	==> dmesg <==
	[  +0.041727] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.227125] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.936410] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.536117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.310713] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.060096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058668] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.185471] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.124899] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.280956] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[ +15.404763] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.059391] kauditd_printk_skb: 130 callbacks suppressed
	[Nov 5 19:12] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +4.014361] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.347496] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.217564] kauditd_printk_skb: 25 callbacks suppressed
	[Nov 5 19:16] systemd-fstab-generator[3091]: Ignoring "noauto" option for root device
	[  +0.061361] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.000648] systemd-fstab-generator[3407]: Ignoring "noauto" option for root device
	[  +0.081673] kauditd_printk_skb: 52 callbacks suppressed
	[  +4.333170] systemd-fstab-generator[3526]: Ignoring "noauto" option for root device
	[  +1.183763] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 5 19:17] kauditd_printk_skb: 66 callbacks suppressed
	
	
	==> etcd [fe5cad52df568a53cd03c490f9d7f1f2b81f1a59e77408ac0054df3d5b979fda] <==
	{"level":"info","ts":"2024-11-05T19:16:47.637817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 is starting a new election at term 1"}
	{"level":"info","ts":"2024-11-05T19:16:47.637925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-11-05T19:16:47.637971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 received MsgPreVoteResp from a006cd7aeaf5eb83 at term 1"}
	{"level":"info","ts":"2024-11-05T19:16:47.638022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 became candidate at term 2"}
	{"level":"info","ts":"2024-11-05T19:16:47.638052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 received MsgVoteResp from a006cd7aeaf5eb83 at term 2"}
	{"level":"info","ts":"2024-11-05T19:16:47.638121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a006cd7aeaf5eb83 became leader at term 2"}
	{"level":"info","ts":"2024-11-05T19:16:47.638155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a006cd7aeaf5eb83 elected leader a006cd7aeaf5eb83 at term 2"}
	{"level":"info","ts":"2024-11-05T19:16:47.642911Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:16:47.646969Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a006cd7aeaf5eb83","local-member-attributes":"{Name:no-preload-459223 ClientURLs:[https://192.168.72.101:2379]}","request-path":"/0/members/a006cd7aeaf5eb83/attributes","cluster-id":"9dd5856f1db18b5a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T19:16:47.647128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:16:47.647257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:16:47.648200Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:16:47.648996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T19:16:47.649054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T19:16:47.649086Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-05T19:16:47.649564Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:16:47.655636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.101:2379"}
	{"level":"info","ts":"2024-11-05T19:16:47.658819Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9dd5856f1db18b5a","local-member-id":"a006cd7aeaf5eb83","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:16:47.704449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:16:47.714856Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:26:47.686671Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2024-11-05T19:26:47.696517Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":685,"took":"9.092579ms","hash":1509781921,"current-db-size-bytes":2150400,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2150400,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-11-05T19:26:47.696667Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1509781921,"revision":685,"compact-revision":-1}
	{"level":"info","ts":"2024-11-05T19:31:12.306217Z","caller":"traceutil/trace.go:171","msg":"trace[1873984684] transaction","detail":"{read_only:false; response_revision:1144; number_of_response:1; }","duration":"307.578108ms","start":"2024-11-05T19:31:11.998585Z","end":"2024-11-05T19:31:12.306163Z","steps":["trace[1873984684] 'process raft request'  (duration: 307.450206ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T19:31:12.307711Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T19:31:11.998563Z","time spent":"308.269071ms","remote":"127.0.0.1:45608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1142 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 19:31:19 up 19 min,  0 users,  load average: 0.11, 0.18, 0.17
	Linux no-preload-459223 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [19f1612ca8def97f2ddeec062f2465a352fb5a78e089b7fd57810688e9364a53] <==
	W1105 19:16:41.803272       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:41.809024       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:41.809042       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.008683       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.034392       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.072804       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.128600       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.129861       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.154429       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.159117       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.171923       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.183306       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.244880       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.260464       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.260547       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.281594       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.288508       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.344219       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.347974       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.452160       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.460027       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.572281       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.671534       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.720454       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1105 19:16:42.816836       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e508df75b1e522dae3562d239ffb38a475532444100740b837b6d380e746f795] <==
	W1105 19:26:50.102122       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:26:50.102242       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:26:50.103201       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:26:50.104353       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:27:50.103778       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:27:50.103872       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1105 19:27:50.104829       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:27:50.104949       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:27:50.104957       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:27:50.107076       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1105 19:29:50.106170       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:29:50.106548       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1105 19:29:50.107824       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 19:29:50.107963       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 19:29:50.108013       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 19:29:50.109104       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e0e6f9312034b86f663d0d18ed2426d2fd85376a9f840ce822a3fb7445b5f1c5] <==
	E1105 19:25:56.154825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:25:56.638448       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:26:26.160518       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:26:26.645930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:26:56.168065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:26:56.654960       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:27:12.649011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-459223"
	E1105 19:27:26.175457       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:27:26.663618       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:27:56.182658       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:27:56.672405       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1105 19:28:04.862618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="219.279µs"
	I1105 19:28:15.864508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="101.21µs"
	E1105 19:28:26.189342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:28:26.683542       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:28:56.196319       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:28:56.692479       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:29:26.204053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:29:26.700948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:29:56.210318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:29:56.708788       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:30:26.216930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:30:26.715392       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1105 19:30:56.224117       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1105 19:30:56.723097       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fef03f0dffe735694ee6cd8eafee54ec715a04ce59d31f3b21f10c6934a2ad5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 19:16:57.580648       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 19:16:57.592492       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.101"]
	E1105 19:16:57.592574       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 19:16:57.641919       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 19:16:57.641979       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 19:16:57.642018       1 server_linux.go:169] "Using iptables Proxier"
	I1105 19:16:57.644364       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 19:16:57.644656       1 server.go:483] "Version info" version="v1.31.2"
	I1105 19:16:57.644682       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:16:57.647555       1 config.go:199] "Starting service config controller"
	I1105 19:16:57.647603       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 19:16:57.647634       1 config.go:105] "Starting endpoint slice config controller"
	I1105 19:16:57.647658       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 19:16:57.648223       1 config.go:328] "Starting node config controller"
	I1105 19:16:57.648253       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 19:16:57.748005       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 19:16:57.748091       1 shared_informer.go:320] Caches are synced for service config
	I1105 19:16:57.748674       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23716e18606f954d46967ca6e39d23b51642480e48084a57c9e1766d69c9d2ec] <==
	W1105 19:16:49.958927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 19:16:49.958963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:49.988674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 19:16:49.988778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.048725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 19:16:50.048819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.059647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 19:16:50.059856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.074308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 19:16:50.074383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.090358       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 19:16:50.090439       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 19:16:50.197143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 19:16:50.197203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.204798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 19:16:50.204843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.204891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 19:16:50.204913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.280014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1105 19:16:50.280063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.290021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 19:16:50.290068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 19:16:50.290509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1105 19:16:50.290581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1105 19:16:53.219907       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 19:30:02 no-preload-459223 kubelet[3414]: E1105 19:30:02.074035    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835002073716945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:12 no-preload-459223 kubelet[3414]: E1105 19:30:12.075392    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835012075142335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:12 no-preload-459223 kubelet[3414]: E1105 19:30:12.075431    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835012075142335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:14 no-preload-459223 kubelet[3414]: E1105 19:30:14.846987    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:30:22 no-preload-459223 kubelet[3414]: E1105 19:30:22.077185    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835022076908867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:22 no-preload-459223 kubelet[3414]: E1105 19:30:22.077479    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835022076908867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:27 no-preload-459223 kubelet[3414]: E1105 19:30:27.846849    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:30:32 no-preload-459223 kubelet[3414]: E1105 19:30:32.079296    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835032079023330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:32 no-preload-459223 kubelet[3414]: E1105 19:30:32.079344    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835032079023330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:40 no-preload-459223 kubelet[3414]: E1105 19:30:40.846777    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:30:42 no-preload-459223 kubelet[3414]: E1105 19:30:42.080916    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835042080495357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:42 no-preload-459223 kubelet[3414]: E1105 19:30:42.080963    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835042080495357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:51 no-preload-459223 kubelet[3414]: E1105 19:30:51.892211    3414 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 19:30:51 no-preload-459223 kubelet[3414]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 19:30:51 no-preload-459223 kubelet[3414]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 19:30:51 no-preload-459223 kubelet[3414]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 19:30:51 no-preload-459223 kubelet[3414]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 19:30:52 no-preload-459223 kubelet[3414]: E1105 19:30:52.082774    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835052082504323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:52 no-preload-459223 kubelet[3414]: E1105 19:30:52.082923    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835052082504323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:30:55 no-preload-459223 kubelet[3414]: E1105 19:30:55.850122    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:31:02 no-preload-459223 kubelet[3414]: E1105 19:31:02.085272    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835062084578333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:02 no-preload-459223 kubelet[3414]: E1105 19:31:02.085668    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835062084578333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:08 no-preload-459223 kubelet[3414]: E1105 19:31:08.847483    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qbgx4" podUID="41686f85-3122-40a1-9c77-70ddef66069e"
	Nov 05 19:31:12 no-preload-459223 kubelet[3414]: E1105 19:31:12.087598    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835072087127850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 19:31:12 no-preload-459223 kubelet[3414]: E1105 19:31:12.087643    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835072087127850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a9107fec3c6ecbc5e58a7976263a73d457d79bf3c21ee4a6be5a5311b365111d] <==
	I1105 19:16:57.643089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 19:16:57.665537       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 19:16:57.665704       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 19:16:57.674070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 19:16:57.674300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-459223_3a9bccea-688e-41f3-9501-f401ac215d00!
	I1105 19:16:57.674502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0cd88a65-6c4d-438c-9999-065e0d08e692", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-459223_3a9bccea-688e-41f3-9501-f401ac215d00 became leader
	I1105 19:16:57.774498       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-459223_3a9bccea-688e-41f3-9501-f401ac215d00!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-459223 -n no-preload-459223
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-459223 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-qbgx4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-459223 describe pod metrics-server-6867b74b74-qbgx4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-459223 describe pod metrics-server-6867b74b74-qbgx4: exit status 1 (69.091783ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-qbgx4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-459223 describe pod metrics-server-6867b74b74-qbgx4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (314.79s)
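
A minimal manual check, purely illustrative and not part of the recorded run: assuming the no-preload-459223 profile were still up, and that the metrics-server-6867b74b74 replicaset seen in the controller-manager and kubelet logs above belongs to a deployment named metrics-server (an inference from the replicaset name, not confirmed by this report), the same addon state could be inspected by hand with:

	# list kube-system pods and their status; an ImagePullBackOff pod would show here
	kubectl --context no-preload-459223 -n kube-system get pods -o wide
	# inspect the deployment spec, including the image reference the kubelet is backing off on
	kubectl --context no-preload-459223 -n kube-system describe deployment metrics-server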

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (117.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:29:06.921400   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:29:50.265731   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:30:05.695818   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.125:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.125:8443: connect: connection refused
E1105 19:30:34.494080   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 2 (230.907577ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-567666" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-567666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-567666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.371µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-567666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 2 (223.860092ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-567666 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-567666 logs -n 25: (1.55247929s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-929548 sudo cat                              | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo                                  | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo find                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-929548 sudo crio                             | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-929548                                       | bridge-929548                | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-537175 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:02 UTC |
	|         | disable-driver-mounts-537175                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:02 UTC | 05 Nov 24 19:04 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-459223             | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-271881            | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC | 05 Nov 24 19:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-608095  | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC | 05 Nov 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-459223                  | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-271881                 | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-459223                                   | no-preload-459223            | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-567666        | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-271881                                  | embed-certs-271881           | jenkins | v1.34.0 | 05 Nov 24 19:05 UTC | 05 Nov 24 19:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-608095       | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-608095 | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:15 UTC |
	|         | default-k8s-diff-port-608095                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-567666             | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC | 05 Nov 24 19:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-567666                              | old-k8s-version-567666       | jenkins | v1.34.0 | 05 Nov 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 19:07:52.649090   74485 out.go:345] Setting OutFile to fd 1 ...
	I1105 19:07:52.649200   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649205   74485 out.go:358] Setting ErrFile to fd 2...
	I1105 19:07:52.649210   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 19:07:52.649374   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 19:07:52.649909   74485 out.go:352] Setting JSON to false
	I1105 19:07:52.650785   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6615,"bootTime":1730827058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 19:07:52.650878   74485 start.go:139] virtualization: kvm guest
	I1105 19:07:52.652866   74485 out.go:177] * [old-k8s-version-567666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 19:07:52.654107   74485 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 19:07:52.654107   74485 notify.go:220] Checking for updates...
	I1105 19:07:52.655282   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 19:07:52.656379   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:07:52.657451   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 19:07:52.658694   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 19:07:52.659835   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 19:07:52.661251   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:07:52.661622   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.661660   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.677005   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I1105 19:07:52.677521   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.678096   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.678118   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.678489   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.678735   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.680466   74485 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1105 19:07:52.681734   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 19:07:52.682087   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:07:52.682139   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:07:52.697071   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1105 19:07:52.697503   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:07:52.697958   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:07:52.697980   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:07:52.698259   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:07:52.698439   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:07:52.732962   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 19:07:52.734079   74485 start.go:297] selected driver: kvm2
	I1105 19:07:52.734094   74485 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.734209   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 19:07:52.734912   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.735038   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 19:07:52.750214   74485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 19:07:52.750609   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:07:52.750641   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:07:52.750697   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:07:52.750745   74485 start.go:340] cluster config:
	{Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:07:52.750877   74485 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 19:07:52.753288   74485 out.go:177] * Starting "old-k8s-version-567666" primary control-plane node in "old-k8s-version-567666" cluster
	I1105 19:07:50.739209   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:53.811246   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:07:52.754354   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:07:52.754407   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 19:07:52.754425   74485 cache.go:56] Caching tarball of preloaded images
	I1105 19:07:52.754503   74485 preload.go:172] Found /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 19:07:52.754515   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 19:07:52.754610   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:07:52.754817   74485 start.go:360] acquireMachinesLock for old-k8s-version-567666: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:07:59.891257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:02.963247   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:09.043263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:12.115289   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:18.195275   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:21.267213   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:27.347251   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:30.419240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:36.499291   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:39.571255   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:45.651258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:48.723262   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:54.803265   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:08:57.875236   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:03.955241   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:07.027229   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:13.107258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:16.179257   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:22.259227   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:25.331263   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:31.411234   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:34.483240   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:40.563258   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:43.635253   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:49.715287   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:52.787276   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:09:58.867242   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:01.939296   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:08.019268   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:11.091350   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:17.171266   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:20.243245   73496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.101:22: connect: no route to host
	I1105 19:10:23.247511   73732 start.go:364] duration metric: took 4m30.277290481s to acquireMachinesLock for "embed-certs-271881"
	I1105 19:10:23.247565   73732 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:23.247590   73732 fix.go:54] fixHost starting: 
	I1105 19:10:23.248173   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:23.248235   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:23.263573   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I1105 19:10:23.264016   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:23.264437   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:10:23.264461   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:23.264888   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:23.265122   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:23.265311   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:10:23.267000   73732 fix.go:112] recreateIfNeeded on embed-certs-271881: state=Stopped err=<nil>
	I1105 19:10:23.267031   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	W1105 19:10:23.267183   73732 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:23.269188   73732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-271881" ...
	I1105 19:10:23.244961   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:23.245021   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245327   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:10:23.245352   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:10:23.245536   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:10:23.247352   73496 machine.go:96] duration metric: took 4m37.425023044s to provisionDockerMachine
	I1105 19:10:23.247393   73496 fix.go:56] duration metric: took 4m37.446801616s for fixHost
	I1105 19:10:23.247400   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 4m37.446835698s
	W1105 19:10:23.247424   73496 start.go:714] error starting host: provision: host is not running
	W1105 19:10:23.247522   73496 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1105 19:10:23.247534   73496 start.go:729] Will try again in 5 seconds ...
	I1105 19:10:23.270443   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Start
	I1105 19:10:23.270681   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring networks are active...
	I1105 19:10:23.271552   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network default is active
	I1105 19:10:23.271924   73732 main.go:141] libmachine: (embed-certs-271881) Ensuring network mk-embed-certs-271881 is active
	I1105 19:10:23.272243   73732 main.go:141] libmachine: (embed-certs-271881) Getting domain xml...
	I1105 19:10:23.273027   73732 main.go:141] libmachine: (embed-certs-271881) Creating domain...
	I1105 19:10:24.503219   73732 main.go:141] libmachine: (embed-certs-271881) Waiting to get IP...
	I1105 19:10:24.504067   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.504444   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.504503   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.504415   75020 retry.go:31] will retry after 194.539819ms: waiting for machine to come up
	I1105 19:10:24.701086   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:24.701552   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:24.701579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:24.701501   75020 retry.go:31] will retry after 361.371677ms: waiting for machine to come up
	I1105 19:10:25.064078   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.064484   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.064512   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.064433   75020 retry.go:31] will retry after 442.206433ms: waiting for machine to come up
	I1105 19:10:25.507981   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:25.508380   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:25.508405   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:25.508338   75020 retry.go:31] will retry after 573.453662ms: waiting for machine to come up
	I1105 19:10:26.083299   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.083727   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.083753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.083670   75020 retry.go:31] will retry after 686.210957ms: waiting for machine to come up
	I1105 19:10:26.771637   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:26.772070   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:26.772112   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:26.772062   75020 retry.go:31] will retry after 685.825223ms: waiting for machine to come up
	I1105 19:10:27.459230   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:27.459652   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:27.459677   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:27.459616   75020 retry.go:31] will retry after 1.167971852s: waiting for machine to come up
	I1105 19:10:28.247729   73496 start.go:360] acquireMachinesLock for no-preload-459223: {Name:mka1d4c5591441593c5e29459ef6950ded9600fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 19:10:28.629194   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:28.629526   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:28.629549   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:28.629488   75020 retry.go:31] will retry after 1.180980288s: waiting for machine to come up
	I1105 19:10:29.812048   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:29.812445   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:29.812475   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:29.812390   75020 retry.go:31] will retry after 1.527253183s: waiting for machine to come up
	I1105 19:10:31.342147   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:31.342519   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:31.342546   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:31.342467   75020 retry.go:31] will retry after 1.597485878s: waiting for machine to come up
	I1105 19:10:32.942141   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:32.942459   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:32.942505   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:32.942431   75020 retry.go:31] will retry after 2.416793509s: waiting for machine to come up
	I1105 19:10:35.360354   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:35.360717   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:35.360743   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:35.360674   75020 retry.go:31] will retry after 3.193637492s: waiting for machine to come up
	I1105 19:10:38.556294   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:38.556744   73732 main.go:141] libmachine: (embed-certs-271881) DBG | unable to find current IP address of domain embed-certs-271881 in network mk-embed-certs-271881
	I1105 19:10:38.556775   73732 main.go:141] libmachine: (embed-certs-271881) DBG | I1105 19:10:38.556673   75020 retry.go:31] will retry after 3.819760443s: waiting for machine to come up
	I1105 19:10:42.380607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381140   73732 main.go:141] libmachine: (embed-certs-271881) Found IP for machine: 192.168.39.58
	I1105 19:10:42.381172   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has current primary IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.381196   73732 main.go:141] libmachine: (embed-certs-271881) Reserving static IP address...
	I1105 19:10:42.381607   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.381634   73732 main.go:141] libmachine: (embed-certs-271881) Reserved static IP address: 192.168.39.58
	I1105 19:10:42.381647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | skip adding static IP to network mk-embed-certs-271881 - found existing host DHCP lease matching {name: "embed-certs-271881", mac: "52:54:00:df:3c:9f", ip: "192.168.39.58"}
	I1105 19:10:42.381671   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Getting to WaitForSSH function...
	I1105 19:10:42.381686   73732 main.go:141] libmachine: (embed-certs-271881) Waiting for SSH to be available...
	I1105 19:10:42.383908   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384306   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.384333   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.384427   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH client type: external
	I1105 19:10:42.384458   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa (-rw-------)
	I1105 19:10:42.384486   73732 main.go:141] libmachine: (embed-certs-271881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:10:42.384502   73732 main.go:141] libmachine: (embed-certs-271881) DBG | About to run SSH command:
	I1105 19:10:42.384510   73732 main.go:141] libmachine: (embed-certs-271881) DBG | exit 0
	I1105 19:10:42.506807   73732 main.go:141] libmachine: (embed-certs-271881) DBG | SSH cmd err, output: <nil>: 
	I1105 19:10:42.507217   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetConfigRaw
	I1105 19:10:42.507868   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.510314   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510647   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.510680   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.510936   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/config.json ...
	I1105 19:10:42.511183   73732 machine.go:93] provisionDockerMachine start ...
	I1105 19:10:42.511203   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:42.511426   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.513759   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514111   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.514144   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.514290   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.514473   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514654   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.514827   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.514979   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.515191   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.515202   73732 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:10:42.619241   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:10:42.619273   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619517   73732 buildroot.go:166] provisioning hostname "embed-certs-271881"
	I1105 19:10:42.619555   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.619735   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.622695   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623117   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.623146   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.623304   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.623465   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623632   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.623825   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.623957   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.624122   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.624135   73732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-271881 && echo "embed-certs-271881" | sudo tee /etc/hostname
	I1105 19:10:42.740722   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-271881
	
	I1105 19:10:42.740749   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.743579   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.743922   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.743945   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.744160   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:42.744343   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744470   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:42.744617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:42.744756   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:42.744950   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:42.744972   73732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-271881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-271881/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-271881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:10:42.854869   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:10:42.854898   73732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:10:42.854926   73732 buildroot.go:174] setting up certificates
	I1105 19:10:42.854940   73732 provision.go:84] configureAuth start
	I1105 19:10:42.854948   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetMachineName
	I1105 19:10:42.855222   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:42.857913   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858228   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.858252   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.858440   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:42.860753   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861041   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:42.861062   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:42.861222   73732 provision.go:143] copyHostCerts
	I1105 19:10:42.861274   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:10:42.861291   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:10:42.861385   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:10:42.861543   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:10:42.861556   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:10:42.861595   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:10:42.861671   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:10:42.861681   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:10:42.861713   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:10:42.861781   73732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.embed-certs-271881 san=[127.0.0.1 192.168.39.58 embed-certs-271881 localhost minikube]
	I1105 19:10:43.659372   74141 start.go:364] duration metric: took 3m39.006624915s to acquireMachinesLock for "default-k8s-diff-port-608095"
	I1105 19:10:43.659450   74141 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:10:43.659458   74141 fix.go:54] fixHost starting: 
	I1105 19:10:43.659814   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:10:43.659874   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:10:43.677604   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I1105 19:10:43.678132   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:10:43.678624   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:10:43.678649   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:10:43.679047   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:10:43.679250   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:10:43.679407   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:10:43.681036   74141 fix.go:112] recreateIfNeeded on default-k8s-diff-port-608095: state=Stopped err=<nil>
	I1105 19:10:43.681063   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	W1105 19:10:43.681194   74141 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:10:43.683110   74141 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-608095" ...
	I1105 19:10:43.684451   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Start
	I1105 19:10:43.684639   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring networks are active...
	I1105 19:10:43.685436   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network default is active
	I1105 19:10:43.685983   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Ensuring network mk-default-k8s-diff-port-608095 is active
	I1105 19:10:43.686398   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Getting domain xml...
	I1105 19:10:43.687143   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Creating domain...
	I1105 19:10:43.044648   73732 provision.go:177] copyRemoteCerts
	I1105 19:10:43.044703   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:10:43.044730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.047120   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047506   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.047538   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.047717   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.047886   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.048037   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.048186   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.129098   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:10:43.154632   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1105 19:10:43.179681   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 19:10:43.205598   73732 provision.go:87] duration metric: took 350.648117ms to configureAuth
	I1105 19:10:43.205622   73732 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:10:43.205822   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:10:43.205900   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.208446   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.208763   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.208799   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.209006   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.209190   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.209489   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.209611   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.209828   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.209850   73732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:10:43.431540   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:10:43.431569   73732 machine.go:96] duration metric: took 920.370689ms to provisionDockerMachine
	I1105 19:10:43.431582   73732 start.go:293] postStartSetup for "embed-certs-271881" (driver="kvm2")
	I1105 19:10:43.431595   73732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:10:43.431617   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.431912   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:10:43.431940   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.434821   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435170   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.435193   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.435338   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.435532   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.435714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.435851   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.517391   73732 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:10:43.521532   73732 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:10:43.521553   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:10:43.521632   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:10:43.521721   73732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:10:43.521839   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:10:43.531045   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:43.556596   73732 start.go:296] duration metric: took 125.000692ms for postStartSetup
	I1105 19:10:43.556634   73732 fix.go:56] duration metric: took 20.309059136s for fixHost
	I1105 19:10:43.556663   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.558888   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559181   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.559220   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.559368   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.559531   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559674   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.559789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.559934   73732 main.go:141] libmachine: Using SSH client type: native
	I1105 19:10:43.560096   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1105 19:10:43.560106   73732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:10:43.659219   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833843.637801657
	
	I1105 19:10:43.659240   73732 fix.go:216] guest clock: 1730833843.637801657
	I1105 19:10:43.659247   73732 fix.go:229] Guest: 2024-11-05 19:10:43.637801657 +0000 UTC Remote: 2024-11-05 19:10:43.556637855 +0000 UTC m=+290.729857868 (delta=81.163802ms)
	I1105 19:10:43.659284   73732 fix.go:200] guest clock delta is within tolerance: 81.163802ms
	I1105 19:10:43.659290   73732 start.go:83] releasing machines lock for "embed-certs-271881", held for 20.411743975s
	I1105 19:10:43.659324   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.659589   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:43.662581   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663025   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.663058   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.663214   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.663907   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:10:43.664017   73732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:10:43.664057   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.664108   73732 ssh_runner.go:195] Run: cat /version.json
	I1105 19:10:43.664131   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:10:43.666998   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667059   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667365   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667395   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667424   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:43.667438   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:43.667543   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667638   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:10:43.667730   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667789   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:10:43.667897   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667968   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:10:43.667996   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.668078   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:10:43.775067   73732 ssh_runner.go:195] Run: systemctl --version
	I1105 19:10:43.780892   73732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:10:43.919564   73732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:10:43.926362   73732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:10:43.926422   73732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:10:43.942359   73732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:10:43.942378   73732 start.go:495] detecting cgroup driver to use...
	I1105 19:10:43.942450   73732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:10:43.964650   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:10:43.980651   73732 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:10:43.980717   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:10:43.993988   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:10:44.007440   73732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:10:44.132040   73732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:10:44.314220   73732 docker.go:233] disabling docker service ...
	I1105 19:10:44.314294   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:10:44.337362   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:10:44.351277   73732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:10:44.485105   73732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:10:44.621596   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:10:44.636254   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:10:44.656530   73732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:10:44.656595   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.667156   73732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:10:44.667237   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.682233   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.692814   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.704688   73732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:10:44.721662   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.738629   73732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.754944   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:10:44.765089   73732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:10:44.774147   73732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:10:44.774210   73732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:10:44.786312   73732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:10:44.795892   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:44.926823   73732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:10:45.022945   73732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:10:45.023042   73732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:10:45.027389   73732 start.go:563] Will wait 60s for crictl version
	I1105 19:10:45.027451   73732 ssh_runner.go:195] Run: which crictl
	I1105 19:10:45.030701   73732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:10:45.067294   73732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:10:45.067410   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.094394   73732 ssh_runner.go:195] Run: crio --version
	I1105 19:10:45.123459   73732 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:10:45.124645   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetIP
	I1105 19:10:45.127396   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.127794   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:10:45.127833   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:10:45.128104   73732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1105 19:10:45.131923   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:45.143951   73732 kubeadm.go:883] updating cluster {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:10:45.144078   73732 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:10:45.144125   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:45.177770   73732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:10:45.177830   73732 ssh_runner.go:195] Run: which lz4
	I1105 19:10:45.181571   73732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:10:45.186569   73732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:10:45.186602   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:10:46.442865   73732 crio.go:462] duration metric: took 1.26132812s to copy over tarball
	I1105 19:10:46.442959   73732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:10:44.962206   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting to get IP...
	I1105 19:10:44.963032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963397   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:44.963492   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:44.963380   75165 retry.go:31] will retry after 274.297859ms: waiting for machine to come up
	I1105 19:10:45.239024   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239453   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.239478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.239406   75165 retry.go:31] will retry after 239.892312ms: waiting for machine to come up
	I1105 19:10:45.481036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481584   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.481647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.481569   75165 retry.go:31] will retry after 360.538082ms: waiting for machine to come up
	I1105 19:10:45.844144   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844565   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:45.844596   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:45.844533   75165 retry.go:31] will retry after 387.597088ms: waiting for machine to come up
	I1105 19:10:46.234241   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.234798   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.234738   75165 retry.go:31] will retry after 597.596298ms: waiting for machine to come up
	I1105 19:10:46.833721   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834170   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:46.834200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:46.834142   75165 retry.go:31] will retry after 688.240413ms: waiting for machine to come up
	I1105 19:10:47.523898   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524412   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:47.524442   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:47.524377   75165 retry.go:31] will retry after 826.38207ms: waiting for machine to come up
	I1105 19:10:48.352258   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352787   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:48.352809   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:48.352681   75165 retry.go:31] will retry after 1.381579847s: waiting for machine to come up
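Editor's note: the repeated "will retry after …: waiting for machine to come up" lines above are libmachine polling the KVM domain until a DHCP lease (and hence an IP address) appears. Below is a minimal, hypothetical Go sketch of that wait-with-growing-backoff pattern; the delays and the lookupIP helper are placeholders for illustration, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for asking libvirt/DHCP for the domain's current address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a randomized, roughly doubling delay,
// mirroring the "will retry after Xms: waiting for machine to come up" lines.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if ip, err := waitForIP("default-k8s-diff-port-608095", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}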
	I1105 19:10:48.547186   73732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104175993s)
	I1105 19:10:48.547221   73732 crio.go:469] duration metric: took 2.104326973s to extract the tarball
	I1105 19:10:48.547231   73732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:10:48.583027   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:10:48.630180   73732 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:10:48.630208   73732 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:10:48.630218   73732 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.31.2 crio true true} ...
	I1105 19:10:48.630349   73732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-271881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:10:48.630412   73732 ssh_runner.go:195] Run: crio config
	I1105 19:10:48.682182   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:48.682204   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:48.682213   73732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:10:48.682232   73732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-271881 NodeName:embed-certs-271881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:10:48.682354   73732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-271881"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
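Editor's note: the block above is the kubeadm configuration minikube renders before (re)starting the control plane; later in this log it is copied to /var/tmp/minikube/kubeadm.yaml and fed to individual `kubeadm init phase` commands. A rough, hypothetical sketch of that write-then-run-phases step in Go follows; the phase list and config path are taken from the log, the error handling and config contents are simplified placeholders.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const cfgPath = "/var/tmp/minikube/kubeadm.yaml"
	// renderedConfig would hold the InitConfiguration/ClusterConfiguration/
	// KubeletConfiguration/KubeProxyConfiguration document shown above.
	renderedConfig := []byte("# kubeadm config goes here\n")

	if err := os.WriteFile(cfgPath, renderedConfig, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write config:", err)
		os.Exit(1)
	}

	// The log runs these phases one at a time rather than a full `kubeadm init`,
	// which lets a restart reuse existing certs, manifests and etcd data.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfgPath},
		{"init", "phase", "kubeconfig", "all", "--config", cfgPath},
		{"init", "phase", "kubelet-start", "--config", cfgPath},
		{"init", "phase", "control-plane", "all", "--config", cfgPath},
		{"init", "phase", "etcd", "local", "--config", cfgPath},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}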
	I1105 19:10:48.682412   73732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:10:48.691968   73732 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:10:48.692031   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:10:48.700980   73732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:10:48.716797   73732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:10:48.732408   73732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1105 19:10:48.748354   73732 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1105 19:10:48.751791   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:10:48.763068   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:10:48.893747   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:10:48.910247   73732 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881 for IP: 192.168.39.58
	I1105 19:10:48.910270   73732 certs.go:194] generating shared ca certs ...
	I1105 19:10:48.910303   73732 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:10:48.910488   73732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:10:48.910547   73732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:10:48.910561   73732 certs.go:256] generating profile certs ...
	I1105 19:10:48.910673   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/client.key
	I1105 19:10:48.910768   73732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key.0a454894
	I1105 19:10:48.910837   73732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key
	I1105 19:10:48.911021   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:10:48.911059   73732 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:10:48.911071   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:10:48.911116   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:10:48.911160   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:10:48.911196   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:10:48.911265   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:10:48.912104   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:10:48.969066   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:10:49.000713   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:10:49.040367   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:10:49.068456   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1105 19:10:49.094166   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:10:49.115986   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:10:49.137770   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/embed-certs-271881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:10:49.161140   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:10:49.182996   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:10:49.206578   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:10:49.230006   73732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:10:49.245835   73732 ssh_runner.go:195] Run: openssl version
	I1105 19:10:49.251252   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:10:49.261237   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265318   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.265398   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:10:49.270753   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:10:49.280568   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:10:49.290580   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294567   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.294644   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:10:49.299812   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:10:49.309398   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:10:49.319451   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323490   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.323543   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:10:49.328708   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:10:49.338805   73732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:10:49.342918   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:10:49.348526   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:10:49.353943   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:10:49.359527   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:10:49.364886   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:10:49.370119   73732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
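Editor's note: each `openssl x509 -noout -in <cert> -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is the signal minikube uses to decide whether control-plane certs need regenerating. The same check can be done without shelling out; the following is a small illustrative Go sketch using crypto/x509 under that assumption (the cert path is just one of the files checked above).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given window (the openssl calls above use 86400s = 24h).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}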
	I1105 19:10:49.375437   73732 kubeadm.go:392] StartCluster: {Name:embed-certs-271881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-271881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:10:49.375531   73732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:10:49.375572   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.415844   73732 cri.go:89] found id: ""
	I1105 19:10:49.415916   73732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:10:49.425336   73732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:10:49.425402   73732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:10:49.425474   73732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:10:49.434717   73732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:10:49.435831   73732 kubeconfig.go:125] found "embed-certs-271881" server: "https://192.168.39.58:8443"
	I1105 19:10:49.437903   73732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:10:49.446625   73732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I1105 19:10:49.446657   73732 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:10:49.446668   73732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:10:49.446732   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:10:49.479546   73732 cri.go:89] found id: ""
	I1105 19:10:49.479639   73732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:10:49.499034   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:10:49.510134   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:10:49.510159   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:10:49.510203   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:10:49.520482   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:10:49.520544   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:10:49.530750   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:10:49.539113   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:10:49.539183   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:10:49.548104   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.556754   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:10:49.556811   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:10:49.565606   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:10:49.574023   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:10:49.574091   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:10:49.582888   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:10:49.591876   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:49.688517   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.070191   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.38163928s)
	I1105 19:10:51.070240   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.267774   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.329051   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:51.406120   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:10:51.406226   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:51.907080   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:52.406468   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:49.735558   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735923   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:49.735987   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:49.735914   75165 retry.go:31] will retry after 1.132319443s: waiting for machine to come up
	I1105 19:10:50.870267   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870770   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:50.870801   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:50.870715   75165 retry.go:31] will retry after 1.791598796s: waiting for machine to come up
	I1105 19:10:52.664538   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:52.665055   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:52.664912   75165 retry.go:31] will retry after 1.910294965s: waiting for machine to come up
	I1105 19:10:52.907103   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.407319   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:10:53.421763   73732 api_server.go:72] duration metric: took 2.015640262s to wait for apiserver process to appear ...
	I1105 19:10:53.421794   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:10:53.421816   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.752768   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.752803   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.752819   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.772365   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:10:55.772412   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:10:55.922705   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:55.928293   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:55.928329   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.422875   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.430633   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.430667   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:56.922156   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:56.934958   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:10:56.935016   73732 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:10:57.422646   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:10:57.428784   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:10:57.435298   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:10:57.435319   73732 api_server.go:131] duration metric: took 4.013519207s to wait for apiserver health ...
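Editor's note: the healthz sequence above (403 for system:anonymous while RBAC bootstrap roles are missing, then 500 while poststarthooks are still running, then 200) is the normal progression while a restarted apiserver finishes initialising. Below is a bare-bones, hypothetical Go sketch of such a poll loop against the URL from the log; verification is skipped because the apiserver serves a cluster-local certificate, and the timeouts are illustrative only.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip cert verification purely for this health probe of a cluster-local
		// serving cert; do not do this for real API traffic.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.58:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 (anonymous user) and 500 (poststarthooks pending) both mean "not yet".
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}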
	I1105 19:10:57.435327   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:10:57.435333   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:10:57.437061   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:10:57.438374   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:10:57.448509   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:10:57.465994   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:10:57.474649   73732 system_pods.go:59] 8 kube-system pods found
	I1105 19:10:57.474682   73732 system_pods.go:61] "coredns-7c65d6cfc9-nwzpq" [be8aa054-3f68-4c19-bae3-9d9cfcb51869] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:10:57.474691   73732 system_pods.go:61] "etcd-embed-certs-271881" [c37c829b-1dca-4659-b24c-4559304d9fe0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:10:57.474703   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [6df78e2a-1360-4c4b-b451-c96aa60f24ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:10:57.474710   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [95a6baca-c246-4043-acbc-235b076a89b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:10:57.474723   73732 system_pods.go:61] "kube-proxy-f945s" [2cb835f0-3727-4dd1-bd21-a21554ffdc0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 19:10:57.474730   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [53e044c5-199c-46f4-b3db-d3b65a8203aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:10:57.474741   73732 system_pods.go:61] "metrics-server-6867b74b74-vw2sm" [403d0c5f-d870-4f89-8caa-f5e9c8bf9ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:10:57.474748   73732 system_pods.go:61] "storage-provisioner" [13a89bf9-fb97-413a-9948-1c69780784cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 19:10:57.474758   73732 system_pods.go:74] duration metric: took 8.737357ms to wait for pod list to return data ...
	I1105 19:10:57.474769   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:10:57.480599   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:10:57.480623   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:10:57.480634   73732 node_conditions.go:105] duration metric: took 5.857622ms to run NodePressure ...
	I1105 19:10:57.480651   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:10:54.577390   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577939   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:54.577969   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:54.577885   75165 retry.go:31] will retry after 3.393120773s: waiting for machine to come up
	I1105 19:10:57.971960   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | unable to find current IP address of domain default-k8s-diff-port-608095 in network mk-default-k8s-diff-port-608095
	I1105 19:10:57.972441   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | I1105 19:10:57.972370   75165 retry.go:31] will retry after 4.425954537s: waiting for machine to come up
	I1105 19:10:57.896717   73732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902115   73732 kubeadm.go:739] kubelet initialised
	I1105 19:10:57.902138   73732 kubeadm.go:740] duration metric: took 5.39576ms waiting for restarted kubelet to initialise ...
	I1105 19:10:57.902152   73732 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:10:57.907293   73732 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:10:59.913946   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:02.414802   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
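Editor's note: the pod_ready lines show minikube polling each system-critical pod until its Ready condition is True, with a 4m0s budget per pod. An equivalent check can be driven from the command line; the following hypothetical Go wrapper around `kubectl wait` illustrates the same signal, with the context and pod names taken from the log above.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// A few of the pods minikube waits on after the control-plane restart.
	pods := []string{
		"coredns-7c65d6cfc9-nwzpq",
		"etcd-embed-certs-271881",
		"kube-apiserver-embed-certs-271881",
	}
	for _, pod := range pods {
		// `kubectl wait` blocks until the Ready condition is True or the timeout
		// expires, which is what the pod_ready.go helpers poll for.
		cmd := exec.Command("kubectl", "--context", "embed-certs-271881",
			"-n", "kube-system", "wait", "--for=condition=Ready",
			"pod/"+pod, "--timeout=4m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%s not ready: %v\n", pod, err)
			os.Exit(1)
		}
	}
	fmt.Println("all pods ready")
}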
	I1105 19:11:03.663928   74485 start.go:364] duration metric: took 3m10.909065205s to acquireMachinesLock for "old-k8s-version-567666"
	I1105 19:11:03.664023   74485 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:03.664038   74485 fix.go:54] fixHost starting: 
	I1105 19:11:03.664514   74485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:03.664569   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:03.682846   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I1105 19:11:03.683341   74485 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:03.683786   74485 main.go:141] libmachine: Using API Version  1
	I1105 19:11:03.683812   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:03.684219   74485 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:03.684407   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:03.684552   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetState
	I1105 19:11:03.686262   74485 fix.go:112] recreateIfNeeded on old-k8s-version-567666: state=Stopped err=<nil>
	I1105 19:11:03.686295   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	W1105 19:11:03.686440   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:03.688047   74485 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-567666" ...
	I1105 19:11:02.401454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.401980   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Found IP for machine: 192.168.50.10
	I1105 19:11:02.402015   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has current primary IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.402025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserving static IP address...
	I1105 19:11:02.402384   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.402413   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Reserved static IP address: 192.168.50.10
	I1105 19:11:02.402432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | skip adding static IP to network mk-default-k8s-diff-port-608095 - found existing host DHCP lease matching {name: "default-k8s-diff-port-608095", mac: "52:54:00:89:ba:6f", ip: "192.168.50.10"}
	I1105 19:11:02.402445   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Waiting for SSH to be available...
	I1105 19:11:02.402461   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Getting to WaitForSSH function...
	I1105 19:11:02.404454   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404751   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.404778   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.404915   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH client type: external
	I1105 19:11:02.404964   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa (-rw-------)
	I1105 19:11:02.405032   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:02.405059   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | About to run SSH command:
	I1105 19:11:02.405072   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | exit 0
	I1105 19:11:02.526769   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:02.527147   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetConfigRaw
	I1105 19:11:02.527756   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.530014   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530325   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.530357   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.530527   74141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/config.json ...
	I1105 19:11:02.530708   74141 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:02.530728   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:02.530921   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.532868   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533184   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.533215   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.533334   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.533493   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533630   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.533761   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.533930   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.534116   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.534128   74141 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:02.631085   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:02.631114   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631351   74141 buildroot.go:166] provisioning hostname "default-k8s-diff-port-608095"
	I1105 19:11:02.631376   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.631540   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.634037   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634371   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.634400   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.634517   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.634691   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634849   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.634995   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.635136   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.635310   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.635326   74141 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-608095 && echo "default-k8s-diff-port-608095" | sudo tee /etc/hostname
	I1105 19:11:02.744298   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-608095
	
	I1105 19:11:02.744327   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.747036   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747348   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.747379   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.747555   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:02.747716   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747846   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:02.747940   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:02.748061   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:02.748266   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:02.748284   74141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-608095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-608095/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-608095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:02.850828   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:02.850854   74141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:02.850906   74141 buildroot.go:174] setting up certificates
	I1105 19:11:02.850923   74141 provision.go:84] configureAuth start
	I1105 19:11:02.850935   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetMachineName
	I1105 19:11:02.851260   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:02.853803   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854062   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.854088   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.854203   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:02.856341   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856629   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:02.856659   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:02.856747   74141 provision.go:143] copyHostCerts
	I1105 19:11:02.856804   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:02.856823   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:02.856874   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:02.856987   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:02.856997   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:02.857017   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:02.857075   74141 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:02.857082   74141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:02.857100   74141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:02.857148   74141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-608095 san=[127.0.0.1 192.168.50.10 default-k8s-diff-port-608095 localhost minikube]
	I1105 19:11:03.048307   74141 provision.go:177] copyRemoteCerts
	I1105 19:11:03.048362   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:03.048386   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.050951   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051303   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.051353   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.051556   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.051785   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.051953   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.052084   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.128441   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:03.150680   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1105 19:11:03.172480   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:03.194311   74141 provision.go:87] duration metric: took 343.374586ms to configureAuth
	I1105 19:11:03.194338   74141 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:03.194499   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:03.194560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.197209   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197585   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.197603   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.197822   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.198006   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198168   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.198336   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.198503   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.198686   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.198706   74141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:03.429895   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:03.429926   74141 machine.go:96] duration metric: took 899.201597ms to provisionDockerMachine
	I1105 19:11:03.429941   74141 start.go:293] postStartSetup for "default-k8s-diff-port-608095" (driver="kvm2")
	I1105 19:11:03.429955   74141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:03.429976   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.430329   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:03.430364   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.433455   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.433791   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.433820   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.434009   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.434323   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.434500   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.434659   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.514652   74141 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:03.518678   74141 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:03.518711   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:03.518774   74141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:03.518877   74141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:03.519014   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:03.528972   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:03.555892   74141 start.go:296] duration metric: took 125.936355ms for postStartSetup
	I1105 19:11:03.555939   74141 fix.go:56] duration metric: took 19.896481237s for fixHost
	I1105 19:11:03.555966   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.558764   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559153   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.559183   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.559402   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.559610   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559788   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.559933   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.560116   74141 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:03.560292   74141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I1105 19:11:03.560303   74141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:03.663723   74141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833863.637227261
	
	I1105 19:11:03.663751   74141 fix.go:216] guest clock: 1730833863.637227261
	I1105 19:11:03.663766   74141 fix.go:229] Guest: 2024-11-05 19:11:03.637227261 +0000 UTC Remote: 2024-11-05 19:11:03.555945261 +0000 UTC m=+239.048686257 (delta=81.282ms)
	I1105 19:11:03.663815   74141 fix.go:200] guest clock delta is within tolerance: 81.282ms
	I1105 19:11:03.663822   74141 start.go:83] releasing machines lock for "default-k8s-diff-port-608095", held for 20.004399519s
	I1105 19:11:03.663858   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.664158   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:03.666922   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667372   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.667408   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.667560   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668101   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668297   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:03.668412   74141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:03.668478   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.668748   74141 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:03.668774   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:03.671463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671781   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.671810   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.671903   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672025   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672175   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672333   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.672369   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:03.672417   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:03.672578   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.672598   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:03.672779   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:03.672925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:03.673106   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:03.777585   74141 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:03.783343   74141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:03.927951   74141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:03.933308   74141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:03.933380   74141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:03.948472   74141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:03.948499   74141 start.go:495] detecting cgroup driver to use...
	I1105 19:11:03.948572   74141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:03.963929   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:03.978578   74141 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:03.978643   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:03.992096   74141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:04.006036   74141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:04.114061   74141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:04.274136   74141 docker.go:233] disabling docker service ...
	I1105 19:11:04.274220   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:04.287806   74141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:04.300294   74141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:04.429899   74141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:04.576075   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:04.590934   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:04.611299   74141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:04.611375   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.623876   74141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:04.623949   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.634333   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.644768   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.654549   74141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:04.665001   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.675464   74141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.693845   74141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:04.703982   74141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:04.713758   74141 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:04.713820   74141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:04.727618   74141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:04.737679   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:04.866928   74141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:04.966529   74141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:04.966599   74141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:04.971536   74141 start.go:563] Will wait 60s for crictl version
	I1105 19:11:04.971602   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:11:04.975344   74141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:05.015910   74141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:05.015987   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.043577   74141 ssh_runner.go:195] Run: crio --version
	I1105 19:11:05.072767   74141 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:03.689374   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .Start
	I1105 19:11:03.689560   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring networks are active...
	I1105 19:11:03.690290   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network default is active
	I1105 19:11:03.690659   74485 main.go:141] libmachine: (old-k8s-version-567666) Ensuring network mk-old-k8s-version-567666 is active
	I1105 19:11:03.691130   74485 main.go:141] libmachine: (old-k8s-version-567666) Getting domain xml...
	I1105 19:11:03.691890   74485 main.go:141] libmachine: (old-k8s-version-567666) Creating domain...
	I1105 19:11:05.006949   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting to get IP...
	I1105 19:11:05.008062   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.008547   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.008605   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.008523   75309 retry.go:31] will retry after 290.124771ms: waiting for machine to come up
	I1105 19:11:05.300185   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.300768   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.300803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.300717   75309 retry.go:31] will retry after 292.829683ms: waiting for machine to come up
	I1105 19:11:05.595365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:05.595881   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:05.595907   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:05.595831   75309 retry.go:31] will retry after 447.168257ms: waiting for machine to come up
	I1105 19:11:06.045320   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.045946   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.045976   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.045893   75309 retry.go:31] will retry after 420.272812ms: waiting for machine to come up
	I1105 19:11:06.467556   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:06.468012   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:06.468039   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:06.467962   75309 retry.go:31] will retry after 657.733497ms: waiting for machine to come up
	I1105 19:11:07.128022   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:07.128531   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:07.128559   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:07.128484   75309 retry.go:31] will retry after 922.664226ms: waiting for machine to come up
	I1105 19:11:04.416533   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:06.915445   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:07.417579   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:07.417610   73732 pod_ready.go:82] duration metric: took 9.510292246s for pod "coredns-7c65d6cfc9-nwzpq" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:07.417620   73732 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:05.073913   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetIP
	I1105 19:11:05.077086   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077430   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:05.077468   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:05.077691   74141 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:05.081724   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:05.093668   74141 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:05.093785   74141 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:05.093853   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:05.128693   74141 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:05.128753   74141 ssh_runner.go:195] Run: which lz4
	I1105 19:11:05.133116   74141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:05.137101   74141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:05.137126   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1105 19:11:06.379012   74141 crio.go:462] duration metric: took 1.245926141s to copy over tarball
	I1105 19:11:06.379088   74141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:08.545369   74141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.166238549s)
	I1105 19:11:08.545405   74141 crio.go:469] duration metric: took 2.166364449s to extract the tarball
	I1105 19:11:08.545422   74141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 19:11:08.581651   74141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:08.628768   74141 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 19:11:08.628795   74141 cache_images.go:84] Images are preloaded, skipping loading
	I1105 19:11:08.628805   74141 kubeadm.go:934] updating node { 192.168.50.10 8444 v1.31.2 crio true true} ...
	I1105 19:11:08.628937   74141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-608095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:08.629056   74141 ssh_runner.go:195] Run: crio config
	I1105 19:11:08.690112   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:08.690140   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:08.690152   74141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:08.690184   74141 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-608095 NodeName:default-k8s-diff-port-608095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:08.690346   74141 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-608095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:08.690415   74141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:08.700222   74141 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:08.700294   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:08.709542   74141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1105 19:11:08.725723   74141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:08.741985   74141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1105 19:11:08.758655   74141 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:08.762296   74141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:08.774119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:08.910000   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:08.926765   74141 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095 for IP: 192.168.50.10
	I1105 19:11:08.926788   74141 certs.go:194] generating shared ca certs ...
	I1105 19:11:08.926806   74141 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:08.927006   74141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:08.927069   74141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:08.927080   74141 certs.go:256] generating profile certs ...
	I1105 19:11:08.927157   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/client.key
	I1105 19:11:08.927229   74141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key.f2b96156
	I1105 19:11:08.927281   74141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key
	I1105 19:11:08.927456   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:08.927506   74141 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:08.927516   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:08.927549   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:08.927585   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:08.927620   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:08.927682   74141 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:08.928417   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:08.971359   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:09.011632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:09.049748   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:09.078632   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 19:11:09.105786   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:09.127855   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:09.151461   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/default-k8s-diff-port-608095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 19:11:09.174068   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:09.196733   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:09.219111   74141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:09.241335   74141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:09.257040   74141 ssh_runner.go:195] Run: openssl version
	I1105 19:11:09.262371   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:09.272232   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276300   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.276362   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:09.281747   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:09.291864   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:09.302012   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306085   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.306142   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:09.311374   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:09.321334   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:09.331208   74141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335401   74141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.335451   74141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:09.340595   74141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:09.350430   74141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:09.354622   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:09.360165   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:09.365624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:09.371545   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:09.377226   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:09.382624   74141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 19:11:09.387929   74141 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-608095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-608095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:09.388032   74141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:09.388076   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.429707   74141 cri.go:89] found id: ""
	I1105 19:11:09.429783   74141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:09.440455   74141 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:09.440476   74141 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:09.440527   74141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:09.451745   74141 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:09.452609   74141 kubeconfig.go:125] found "default-k8s-diff-port-608095" server: "https://192.168.50.10:8444"
	I1105 19:11:09.454539   74141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:09.463900   74141 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.10
	I1105 19:11:09.463926   74141 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:09.463936   74141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:09.463987   74141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:09.497583   74141 cri.go:89] found id: ""
	I1105 19:11:09.497656   74141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:09.513767   74141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:09.523219   74141 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:09.523237   74141 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:09.523284   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1105 19:11:09.533116   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:09.533181   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:09.542453   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1105 19:11:08.053120   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:08.053610   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:08.053636   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:08.053587   75309 retry.go:31] will retry after 947.415519ms: waiting for machine to come up
	I1105 19:11:09.002803   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:09.003423   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:09.003452   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:09.003363   75309 retry.go:31] will retry after 1.07978111s: waiting for machine to come up
	I1105 19:11:10.084404   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:10.084808   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:10.084830   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:10.084784   75309 retry.go:31] will retry after 1.482510322s: waiting for machine to come up
	I1105 19:11:11.568421   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:11.568840   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:11.568869   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:11.568791   75309 retry.go:31] will retry after 1.630983434s: waiting for machine to come up
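The retry.go:31 lines above show libmachine repeatedly looking for a DHCP lease matching the domain's MAC address in network mk-old-k8s-version-567666, waiting a little longer after each miss. A minimal sketch of that wait pattern (illustrative only, not minikube's retry package; lookupIP and the 90s budget are hypothetical stand-ins):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the network's DHCP leases
// for the domain's MAC address; it returns an error until a lease exists.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

func main() {
	backoff := 500 * time.Millisecond
	deadline := time.Now().Add(90 * time.Second) // hypothetical overall budget
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine came up at", ip)
			return
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait, roughly like the increasing intervals in the log
	}
	fmt.Println("timed out waiting for an IP")
}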
	I1105 19:11:08.426308   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.426337   73732 pod_ready.go:82] duration metric: took 1.008708779s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.426350   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432238   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.432264   73732 pod_ready.go:82] duration metric: took 5.905051ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.432276   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438187   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.438214   73732 pod_ready.go:82] duration metric: took 5.9294ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.438226   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443794   73732 pod_ready.go:93] pod "kube-proxy-f945s" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:08.443823   73732 pod_ready.go:82] duration metric: took 5.587862ms for pod "kube-proxy-f945s" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:08.443835   73732 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:10.449498   73732 pod_ready.go:103] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:12.454934   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:12.454965   73732 pod_ready.go:82] duration metric: took 4.011121022s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:12.455003   73732 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:09.551174   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:09.551235   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:09.560481   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.571928   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:09.571997   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:09.583935   74141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1105 19:11:09.595336   74141 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:09.595401   74141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:09.605061   74141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:09.613920   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:09.718759   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.680100   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.901034   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.951868   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:10.997866   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:10.997956   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.498113   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:11.998192   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.498517   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:12.998919   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:13.013078   74141 api_server.go:72] duration metric: took 2.01520799s to wait for apiserver process to appear ...
	I1105 19:11:13.013106   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:11:13.013136   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.042333   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.042388   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.042404   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.085574   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:11:16.085602   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:11:16.513733   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:16.518755   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:16.518789   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.013278   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.019214   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:11:17.019236   74141 api_server.go:103] status: https://192.168.50.10:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:11:17.513886   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:11:17.519036   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:11:17.528970   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:11:17.529000   74141 api_server.go:131] duration metric: took 4.515887773s to wait for apiserver health ...
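The healthz sequence above is a simple readiness poll: GET https://192.168.50.10:8444/healthz over and over, treating the 403 (anonymous access to /healthz not yet authorized) and the 500 (post-start hooks such as rbac/bootstrap-roles still failing) as "not ready" until a plain 200/ok comes back. A minimal standard-library sketch of that loop, skipping TLS verification the way a bootstrap probe would (not minikube's actual api_server.go code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.10:8444/healthz" // address and port taken from the log above
	for {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy:", string(body)) // "ok"
			return
		}
		// 403 before RBAC bootstrap and 500 while post-start hooks run are expected here.
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		time.Sleep(500 * time.Millisecond)
	}
}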
	I1105 19:11:17.529009   74141 cni.go:84] Creating CNI manager for ""
	I1105 19:11:17.529016   74141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:17.530429   74141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:11:13.201891   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:13.202425   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:13.202453   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:13.202387   75309 retry.go:31] will retry after 2.689744765s: waiting for machine to come up
	I1105 19:11:15.893632   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:15.893989   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:15.894034   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:15.893964   75309 retry.go:31] will retry after 2.460566804s: waiting for machine to come up
	I1105 19:11:14.465748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:16.961287   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:17.531600   74141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:11:17.544876   74141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:11:17.567835   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:11:17.583925   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:11:17.583976   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:11:17.583988   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:11:17.583999   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:11:17.584015   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:11:17.584027   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:11:17.584041   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:11:17.584052   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:11:17.584060   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:11:17.584068   74141 system_pods.go:74] duration metric: took 16.206948ms to wait for pod list to return data ...
	I1105 19:11:17.584081   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:11:17.593935   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:11:17.593960   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:11:17.593971   74141 node_conditions.go:105] duration metric: took 9.883295ms to run NodePressure ...
	I1105 19:11:17.593988   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:17.929181   74141 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933853   74141 kubeadm.go:739] kubelet initialised
	I1105 19:11:17.933879   74141 kubeadm.go:740] duration metric: took 4.667992ms waiting for restarted kubelet to initialise ...
	I1105 19:11:17.933888   74141 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:17.940560   74141 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.952799   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952832   74141 pod_ready.go:82] duration metric: took 12.240861ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.952845   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.952856   74141 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.959079   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959105   74141 pod_ready.go:82] duration metric: took 6.23649ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.959119   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.959130   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.963797   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963817   74141 pod_ready.go:82] duration metric: took 4.681011ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.963830   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.963837   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:17.970915   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970935   74141 pod_ready.go:82] duration metric: took 7.091116ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:17.970945   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:17.970951   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.371478   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371503   74141 pod_ready.go:82] duration metric: took 400.5454ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.371512   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-proxy-8v42c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.371519   74141 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:18.771731   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771768   74141 pod_ready.go:82] duration metric: took 400.239012ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:18.771783   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:18.771792   74141 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:19.171239   74141 pod_ready.go:98] node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171271   74141 pod_ready.go:82] duration metric: took 399.46983ms for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:11:19.171286   74141 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-608095" hosting pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:19.171296   74141 pod_ready.go:39] duration metric: took 1.237397637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
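The pod_ready lines above check each system-critical pod's Ready condition, and the pod_ready.go:98 errors show pods being skipped while their hosting node has not yet reported Ready after the restart. A minimal client-go sketch of the per-pod check, assuming a reachable kubeconfig (illustrative, not minikube's pod_ready helper; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the test run uses the kubeconfig under the jenkins minikube-integration tree.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-608095", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
		}
	}
}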
	I1105 19:11:19.171315   74141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:11:19.185845   74141 ops.go:34] apiserver oom_adj: -16
	I1105 19:11:19.185869   74141 kubeadm.go:597] duration metric: took 9.745385943s to restartPrimaryControlPlane
	I1105 19:11:19.185880   74141 kubeadm.go:394] duration metric: took 9.797958845s to StartCluster
	I1105 19:11:19.185901   74141 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.185989   74141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:19.187722   74141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:19.187971   74141 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.10 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:11:19.188036   74141 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:11:19.188142   74141 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188160   74141 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-608095"
	I1105 19:11:19.188159   74141 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-608095"
	W1105 19:11:19.188171   74141 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:11:19.188199   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188236   74141 config.go:182] Loaded profile config "default-k8s-diff-port-608095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:19.188248   74141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-608095"
	I1105 19:11:19.188273   74141 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-608095"
	I1105 19:11:19.188310   74141 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.188323   74141 addons.go:243] addon metrics-server should already be in state true
	I1105 19:11:19.188379   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.188526   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188569   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188674   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188725   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.188802   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.188823   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.189792   74141 out.go:177] * Verifying Kubernetes components...
	I1105 19:11:19.191119   74141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:19.203875   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I1105 19:11:19.204313   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.204803   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.204830   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.205083   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I1105 19:11:19.205175   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.205432   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.205488   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.205973   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.205999   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.206357   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.206916   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.206955   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.207292   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I1105 19:11:19.207671   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.208122   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.208146   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.208484   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.208861   74141 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-608095"
	W1105 19:11:19.208882   74141 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:11:19.208909   74141 host.go:66] Checking if "default-k8s-diff-port-608095" exists ...
	I1105 19:11:19.209004   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209045   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.209234   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.209273   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.223963   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I1105 19:11:19.224405   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.225044   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.225074   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.225460   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.226141   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I1105 19:11:19.226463   74141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:19.226509   74141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:19.226577   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.226757   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I1105 19:11:19.227058   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.227081   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.227475   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.227558   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.227797   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.228116   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.228136   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.228530   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.228755   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.229870   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.230471   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.232239   74141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:19.232263   74141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:11:19.233508   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:11:19.233527   74141 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:11:19.233548   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.233607   74141 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.233626   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:11:19.233647   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.237337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237365   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237895   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237928   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.237958   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.237972   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.238155   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238270   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.238337   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238440   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.238463   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238623   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.238681   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.239040   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.243685   74141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1105 19:11:19.244073   74141 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:19.244584   74141 main.go:141] libmachine: Using API Version  1
	I1105 19:11:19.244602   74141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:19.244951   74141 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:19.245112   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetState
	I1105 19:11:19.246617   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .DriverName
	I1105 19:11:19.246814   74141 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.246830   74141 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:11:19.246845   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHHostname
	I1105 19:11:19.249467   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.249896   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:ba:6f", ip: ""} in network mk-default-k8s-diff-port-608095: {Iface:virbr2 ExpiryTime:2024-11-05 20:10:54 +0000 UTC Type:0 Mac:52:54:00:89:ba:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:default-k8s-diff-port-608095 Clientid:01:52:54:00:89:ba:6f}
	I1105 19:11:19.249925   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | domain default-k8s-diff-port-608095 has defined IP address 192.168.50.10 and MAC address 52:54:00:89:ba:6f in network mk-default-k8s-diff-port-608095
	I1105 19:11:19.250139   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHPort
	I1105 19:11:19.250317   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHKeyPath
	I1105 19:11:19.250466   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .GetSSHUsername
	I1105 19:11:19.250636   74141 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/default-k8s-diff-port-608095/id_rsa Username:docker}
	I1105 19:11:19.396917   74141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:19.412224   74141 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:19.541493   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:11:19.566934   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:11:19.566982   74141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:11:19.567627   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:11:19.607685   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:11:19.607717   74141 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:11:19.640921   74141 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:19.640959   74141 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:11:19.674550   74141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:11:20.091222   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091248   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091528   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091583   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091596   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.091605   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.091807   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.091868   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.091853   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.105073   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.105093   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.105426   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.105442   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719139   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.151476995s)
	I1105 19:11:20.719187   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719200   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719194   74141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.044605505s)
	I1105 19:11:20.719236   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719256   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719511   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719582   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719593   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719596   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719631   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719580   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719643   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719654   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719670   74141 main.go:141] libmachine: Making call to close driver server
	I1105 19:11:20.719680   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) Calling .Close
	I1105 19:11:20.719897   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719946   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719948   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.719903   74141 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:11:20.719982   74141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:11:20.719990   74141 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-608095"
	I1105 19:11:20.719927   74141 main.go:141] libmachine: (default-k8s-diff-port-608095) DBG | Closing plugin on server side
	I1105 19:11:20.721843   74141 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
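The addon enable flow above copies the manifests into /etc/kubernetes/addons/ on the VM and applies them with the bundled kubectl against the in-VM kubeconfig. A hedged sketch of the equivalent apply step (paths copied from the log; it only makes sense when run inside the minikube VM, so treat it as illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	// Same kubeconfig the log passes via sudo KUBECONFIG=...
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}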
	I1105 19:11:22.583507   73496 start.go:364] duration metric: took 54.335724939s to acquireMachinesLock for "no-preload-459223"
	I1105 19:11:22.583581   73496 start.go:96] Skipping create...Using existing machine configuration
	I1105 19:11:22.583590   73496 fix.go:54] fixHost starting: 
	I1105 19:11:22.584018   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:11:22.584054   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:11:22.603921   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1105 19:11:22.604367   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:11:22.604825   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:11:22.604845   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:11:22.605233   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:11:22.605408   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:22.605534   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:11:22.607289   73496 fix.go:112] recreateIfNeeded on no-preload-459223: state=Stopped err=<nil>
	I1105 19:11:22.607314   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	W1105 19:11:22.607458   73496 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 19:11:22.609455   73496 out.go:177] * Restarting existing kvm2 VM for "no-preload-459223" ...
	I1105 19:11:18.357643   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:18.358065   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | unable to find current IP address of domain old-k8s-version-567666 in network mk-old-k8s-version-567666
	I1105 19:11:18.358099   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | I1105 19:11:18.358009   75309 retry.go:31] will retry after 3.036834524s: waiting for machine to come up
	I1105 19:11:21.398221   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398763   74485 main.go:141] libmachine: (old-k8s-version-567666) Found IP for machine: 192.168.61.125
	I1105 19:11:21.398825   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has current primary IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.398843   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserving static IP address...
	I1105 19:11:21.399327   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.399350   74485 main.go:141] libmachine: (old-k8s-version-567666) Reserved static IP address: 192.168.61.125
	I1105 19:11:21.399365   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | skip adding static IP to network mk-old-k8s-version-567666 - found existing host DHCP lease matching {name: "old-k8s-version-567666", mac: "52:54:00:19:75:85", ip: "192.168.61.125"}
	I1105 19:11:21.399379   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Getting to WaitForSSH function...
	I1105 19:11:21.399394   74485 main.go:141] libmachine: (old-k8s-version-567666) Waiting for SSH to be available...
	I1105 19:11:21.401270   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401664   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.401691   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.401866   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH client type: external
	I1105 19:11:21.401897   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa (-rw-------)
	I1105 19:11:21.401935   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:21.401949   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | About to run SSH command:
	I1105 19:11:21.401959   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | exit 0
	I1105 19:11:21.527815   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:21.528165   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetConfigRaw
	I1105 19:11:21.528874   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.531373   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531647   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.531672   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.531876   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/config.json ...
	I1105 19:11:21.532071   74485 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:21.532092   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:21.532332   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.534177   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534431   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.534465   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.534556   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.534716   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534845   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.534960   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.535142   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.535329   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.535341   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:21.643321   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:21.643354   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643618   74485 buildroot.go:166] provisioning hostname "old-k8s-version-567666"
	I1105 19:11:21.643646   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.643812   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.646230   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646628   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.646666   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.646839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.647037   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647167   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.647290   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.647421   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.647579   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.647592   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-567666 && echo "old-k8s-version-567666" | sudo tee /etc/hostname
	I1105 19:11:21.770209   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-567666
	
	I1105 19:11:21.770255   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.772932   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773314   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.773346   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.773484   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.773691   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773839   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.773950   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.774121   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:21.774357   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:21.774386   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-567666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-567666/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-567666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:21.890834   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:21.890860   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:21.890915   74485 buildroot.go:174] setting up certificates
	I1105 19:11:21.890929   74485 provision.go:84] configureAuth start
	I1105 19:11:21.890944   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetMachineName
	I1105 19:11:21.891224   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:21.893835   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894256   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.894285   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.894385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.896436   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896699   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.896715   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.896893   74485 provision.go:143] copyHostCerts
	I1105 19:11:21.896951   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:21.896967   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:21.897037   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:21.897163   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:21.897176   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:21.897205   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:21.897279   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:21.897289   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:21.897315   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:21.897396   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-567666 san=[127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666]
	I1105 19:11:21.962153   74485 provision.go:177] copyRemoteCerts
	I1105 19:11:21.962219   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:21.962257   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:21.964765   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965125   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:21.965166   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:21.965330   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:21.965478   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:21.965603   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:21.965746   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.048519   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:22.072975   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1105 19:11:22.098263   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:22.120258   74485 provision.go:87] duration metric: took 229.316972ms to configureAuth
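For reference, the configureAuth/copyRemoteCerts step above places the host-side certificates at fixed paths on the guest. A quick manual spot-check (not part of this run; paths and SANs taken from the scp and "generating server cert" lines above) would be:

	# certificates copied by configureAuth, per the scp lines above:
	#   /etc/docker/ca.pem         <- .minikube/certs/ca.pem
	#   /etc/docker/server.pem     <- .minikube/machines/server.pem (SANs: 127.0.0.1 192.168.61.125 localhost minikube old-k8s-version-567666)
	#   /etc/docker/server-key.pem <- .minikube/machines/server-key.pem
	ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem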
	I1105 19:11:22.120285   74485 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:22.120444   74485 config.go:182] Loaded profile config "old-k8s-version-567666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1105 19:11:22.120516   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.123859   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124309   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.124344   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.124536   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.124737   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.124922   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.125055   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.125213   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.125375   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.125388   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:22.349922   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:22.349964   74485 machine.go:96] duration metric: took 817.87332ms to provisionDockerMachine
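The container-runtime options written just above land in a small environment file on the guest before crio is restarted. A hypothetical spot-check (not executed in this run) would simply read the file back and confirm crio came up again:

	# expected contents, per the tee command above:
	#   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio   # crio was restarted immediately after the file was written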
	I1105 19:11:22.349979   74485 start.go:293] postStartSetup for "old-k8s-version-567666" (driver="kvm2")
	I1105 19:11:22.349992   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:22.350014   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.350350   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:22.350385   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.352922   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353310   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.353332   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.353459   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.353638   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.353807   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.353921   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.437482   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:22.441617   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:22.441646   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:22.441711   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:22.441807   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:22.441929   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:22.451016   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:22.474199   74485 start.go:296] duration metric: took 124.207336ms for postStartSetup
	I1105 19:11:22.474233   74485 fix.go:56] duration metric: took 18.810197154s for fixHost
	I1105 19:11:22.474269   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.476786   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477119   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.477157   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.477279   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.477471   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477621   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.477753   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.477910   74485 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:22.478070   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I1105 19:11:22.478081   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:22.583343   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833882.558222038
	
	I1105 19:11:22.583363   74485 fix.go:216] guest clock: 1730833882.558222038
	I1105 19:11:22.583372   74485 fix.go:229] Guest: 2024-11-05 19:11:22.558222038 +0000 UTC Remote: 2024-11-05 19:11:22.474236871 +0000 UTC m=+209.862783450 (delta=83.985167ms)
	I1105 19:11:22.583418   74485 fix.go:200] guest clock delta is within tolerance: 83.985167ms
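The guest-clock check above compares a timestamp taken over SSH (`date +%s.%N`) with the host clock. A minimal sketch of the same comparison, assuming the key and address shown in this log, is:

	guest=$(ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa docker@192.168.61.125 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN{printf "guest/host clock delta: %.3fs\n", h-g}'
	# this run measured a delta of ~0.084s, which fix.go reports as within tolerance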
	I1105 19:11:22.583429   74485 start.go:83] releasing machines lock for "old-k8s-version-567666", held for 18.919444623s
	I1105 19:11:22.583460   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.583717   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:22.586183   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586479   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.586509   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.586687   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587137   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587310   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .DriverName
	I1105 19:11:22.587400   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:22.587448   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.587521   74485 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:22.587548   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHHostname
	I1105 19:11:22.590145   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590474   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.590507   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590530   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.590655   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.590831   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.590995   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:22.591010   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591037   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:22.591179   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:22.591286   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHPort
	I1105 19:11:22.591438   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHKeyPath
	I1105 19:11:22.591558   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetSSHUsername
	I1105 19:11:22.591702   74485 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/old-k8s-version-567666/id_rsa Username:docker}
	I1105 19:11:19.461723   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:21.962582   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:22.702707   74485 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:22.708965   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:22.856764   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:22.863791   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:22.863866   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:22.883997   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:22.884022   74485 start.go:495] detecting cgroup driver to use...
	I1105 19:11:22.884094   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:22.901499   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:22.919358   74485 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:22.919422   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:22.936964   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:22.953538   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:23.077720   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:23.218316   74485 docker.go:233] disabling docker service ...
	I1105 19:11:23.218390   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:23.238316   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:23.251814   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:23.427386   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:23.552928   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:23.567149   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:23.587241   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1105 19:11:23.587307   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.597558   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:23.597620   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.607466   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.616794   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:23.626425   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:23.637121   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:23.649243   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:23.649305   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:23.664648   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:23.675060   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:23.812636   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:23.903326   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:23.903404   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:23.908377   74485 start.go:563] Will wait 60s for crictl version
	I1105 19:11:23.908434   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:23.912163   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:23.961712   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:23.961794   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:23.992951   74485 ssh_runner.go:195] Run: crio --version
	I1105 19:11:24.032041   74485 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
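Collected from the ssh_runner calls above, the runtime switch for this profile amounts to the following sequence on the guest. This is a condensed sketch of what the log already shows, not an additional step in the run:

	# point crictl at the CRI-O socket
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pause image and cgroup settings for CRI-O
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	# the sysctl probe for bridge-nf-call-iptables failed above, so br_netfilter is loaded explicitly
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	# restart CRI-O and verify the runtime answers on the socket
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo /usr/bin/crictl version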
	I1105 19:11:20.723316   74141 addons.go:510] duration metric: took 1.53528546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1105 19:11:21.416385   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:23.416458   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:22.610737   73496 main.go:141] libmachine: (no-preload-459223) Calling .Start
	I1105 19:11:22.610910   73496 main.go:141] libmachine: (no-preload-459223) Ensuring networks are active...
	I1105 19:11:22.611680   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network default is active
	I1105 19:11:22.612057   73496 main.go:141] libmachine: (no-preload-459223) Ensuring network mk-no-preload-459223 is active
	I1105 19:11:22.612426   73496 main.go:141] libmachine: (no-preload-459223) Getting domain xml...
	I1105 19:11:22.613081   73496 main.go:141] libmachine: (no-preload-459223) Creating domain...
	I1105 19:11:24.013821   73496 main.go:141] libmachine: (no-preload-459223) Waiting to get IP...
	I1105 19:11:24.014922   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.015467   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.015561   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.015439   75501 retry.go:31] will retry after 233.461829ms: waiting for machine to come up
	I1105 19:11:24.251339   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.252673   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.252799   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.252760   75501 retry.go:31] will retry after 276.401207ms: waiting for machine to come up
	I1105 19:11:24.531408   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.531964   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.531987   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.531909   75501 retry.go:31] will retry after 367.69826ms: waiting for machine to come up
	I1105 19:11:24.901179   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:24.901579   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:24.901608   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:24.901536   75501 retry.go:31] will retry after 602.654501ms: waiting for machine to come up
	I1105 19:11:25.505889   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:25.506403   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:25.506426   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:25.506364   75501 retry.go:31] will retry after 492.077165ms: waiting for machine to come up
	I1105 19:11:24.033400   74485 main.go:141] libmachine: (old-k8s-version-567666) Calling .GetIP
	I1105 19:11:24.036549   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037128   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:75:85", ip: ""} in network mk-old-k8s-version-567666: {Iface:virbr3 ExpiryTime:2024-11-05 20:11:14 +0000 UTC Type:0 Mac:52:54:00:19:75:85 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:old-k8s-version-567666 Clientid:01:52:54:00:19:75:85}
	I1105 19:11:24.037165   74485 main.go:141] libmachine: (old-k8s-version-567666) DBG | domain old-k8s-version-567666 has defined IP address 192.168.61.125 and MAC address 52:54:00:19:75:85 in network mk-old-k8s-version-567666
	I1105 19:11:24.037346   74485 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:24.042641   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:24.055174   74485 kubeadm.go:883] updating cluster {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:24.055327   74485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 19:11:24.055388   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:24.101655   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:24.101724   74485 ssh_runner.go:195] Run: which lz4
	I1105 19:11:24.105618   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 19:11:24.109705   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 19:11:24.109735   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1105 19:11:25.602158   74485 crio.go:462] duration metric: took 1.496564307s to copy over tarball
	I1105 19:11:25.602236   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 19:11:23.963218   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:26.461963   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:25.419351   74141 node_ready.go:53] node "default-k8s-diff-port-608095" has status "Ready":"False"
	I1105 19:11:26.916693   74141 node_ready.go:49] node "default-k8s-diff-port-608095" has status "Ready":"True"
	I1105 19:11:26.916731   74141 node_ready.go:38] duration metric: took 7.50447744s for node "default-k8s-diff-port-608095" to be "Ready" ...
	I1105 19:11:26.916744   74141 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:11:26.922179   74141 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927845   74141 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.927879   74141 pod_ready.go:82] duration metric: took 5.666725ms for pod "coredns-7c65d6cfc9-cdvml" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.927892   74141 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932723   74141 pod_ready.go:93] pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.932752   74141 pod_ready.go:82] duration metric: took 4.843531ms for pod "etcd-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.932761   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937108   74141 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.937137   74141 pod_ready.go:82] duration metric: took 4.368536ms for pod "kube-apiserver-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.937152   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.941970   74141 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:26.941995   74141 pod_ready.go:82] duration metric: took 4.833418ms for pod "kube-controller-manager-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.942008   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317480   74141 pod_ready.go:93] pod "kube-proxy-8v42c" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.317505   74141 pod_ready.go:82] duration metric: took 375.489077ms for pod "kube-proxy-8v42c" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.317517   74141 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717923   74141 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace has status "Ready":"True"
	I1105 19:11:27.717945   74141 pod_ready.go:82] duration metric: took 400.42059ms for pod "kube-scheduler-default-k8s-diff-port-608095" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:27.717956   74141 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	I1105 19:11:26.000041   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.000558   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.000613   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.000525   75501 retry.go:31] will retry after 920.198126ms: waiting for machine to come up
	I1105 19:11:26.922134   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:26.922917   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:26.922951   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:26.922858   75501 retry.go:31] will retry after 1.071853506s: waiting for machine to come up
	I1105 19:11:27.996574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:27.996995   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:27.997020   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:27.996949   75501 retry.go:31] will retry after 1.283200825s: waiting for machine to come up
	I1105 19:11:29.282457   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:29.282942   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:29.282979   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:29.282903   75501 retry.go:31] will retry after 1.512809658s: waiting for machine to come up
	I1105 19:11:28.701223   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.098952901s)
	I1105 19:11:28.701253   74485 crio.go:469] duration metric: took 3.099065633s to extract the tarball
	I1105 19:11:28.701263   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
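Because no preload was found on the guest, the ~473 MB tarball is shipped over SSH and unpacked into /var. The equivalent guest-side commands, as executed via ssh_runner above, are (sketch only; paths taken from this log):

	# extract the cached images/layers into /var, preserving security xattrs, then clean up
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	rm /preloaded.tar.lz4
	sudo crictl images --output json   # re-check which images the runtime now has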
	I1105 19:11:28.744214   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:28.778845   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1105 19:11:28.778868   74485 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:28.778962   74485 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:28.778945   74485 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.779024   74485 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.779039   74485 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.778939   74485 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.779067   74485 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.779083   74485 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.778957   74485 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781024   74485 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1105 19:11:28.781003   74485 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:28.781009   74485 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:28.781052   74485 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:28.781002   74485 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:28.781088   74485 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:28.781114   74485 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.013637   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1105 19:11:29.043928   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.043936   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.044140   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.045892   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.046313   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.055792   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.081724   74485 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1105 19:11:29.081779   74485 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1105 19:11:29.081826   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.234925   74485 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1105 19:11:29.234966   74485 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.235046   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235079   74485 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1105 19:11:29.235112   74485 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.235136   74485 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1105 19:11:29.235152   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235167   74485 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.235200   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235238   74485 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1105 19:11:29.235277   74485 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.235298   74485 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1105 19:11:29.235320   74485 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.235333   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235352   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235351   74485 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1105 19:11:29.235385   74485 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.235415   74485 ssh_runner.go:195] Run: which crictl
	I1105 19:11:29.235426   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.251787   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.251873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.251960   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.251985   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.252000   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.371298   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.415548   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.415592   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.415654   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.415710   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.415791   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.415868   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.466873   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1105 19:11:29.544593   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1105 19:11:29.544660   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1105 19:11:29.586695   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1105 19:11:29.586714   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1105 19:11:29.586812   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1105 19:11:29.586916   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1105 19:11:29.606582   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1105 19:11:29.707767   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1105 19:11:29.707803   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1105 19:11:29.716195   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1105 19:11:29.723097   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1105 19:11:29.723153   74485 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1105 19:11:30.039971   74485 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:30.182760   74485 cache_images.go:92] duration metric: took 1.403874987s to LoadCachedImages
	W1105 19:11:30.182890   74485 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
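Each cached image is checked by comparing the image ID that podman reports on the guest against the ID minikube expects; when they differ or the image is missing, the stale copy is removed with crictl and the image is marked for loading from the local cache. A minimal sketch of that check for one image (expected ID taken from the "needs transfer" line above) looks like:

	img=registry.k8s.io/pause:3.2
	want=80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c   # ID the cache expects, from the log
	have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null)
	if [ "$have" != "$want" ]; then
	    sudo /usr/bin/crictl rmi "$img"
	    # the image would then be loaded from .minikube/cache/images/amd64/registry.k8s.io/pause_3.2;
	    # in this run that cache file was missing, hence the warning above
	fi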
	I1105 19:11:30.182912   74485 kubeadm.go:934] updating node { 192.168.61.125 8443 v1.20.0 crio true true} ...
	I1105 19:11:30.183052   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-567666 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:30.183146   74485 ssh_runner.go:195] Run: crio config
	I1105 19:11:30.235206   74485 cni.go:84] Creating CNI manager for ""
	I1105 19:11:30.235241   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:30.235253   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:30.235277   74485 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-567666 NodeName:old-k8s-version-567666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1105 19:11:30.235433   74485 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-567666"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
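	(editor note) The kubeadm config printed above (kubeadm.go:195) is rendered from the kubeadm options struct logged a few lines earlier. A minimal sketch of how such a document can be produced from those values with text/template; the template shape and field names here are illustrative only, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A small subset of the kubeadm options shown in the log; names are illustrative.
type initConfig struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	// Values taken from the log above.
	cfg := initConfig{
		AdvertiseAddress: "192.168.61.125",
		BindPort:         8443,
		NodeName:         "old-k8s-version-567666",
		CRISocket:        "/var/run/crio/crio.sock",
		NodeIP:           "192.168.61.125",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}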
	
	I1105 19:11:30.235503   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1105 19:11:30.245189   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:30.245263   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:30.254772   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1105 19:11:30.271711   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:30.288568   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1105 19:11:30.309098   74485 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:30.313211   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:30.325637   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:30.447346   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:30.466863   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666 for IP: 192.168.61.125
	I1105 19:11:30.466884   74485 certs.go:194] generating shared ca certs ...
	I1105 19:11:30.466898   74485 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:30.467086   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:30.467152   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:30.467165   74485 certs.go:256] generating profile certs ...
	I1105 19:11:30.467322   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/client.key
	I1105 19:11:30.467398   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key.535024f8
	I1105 19:11:30.467448   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key
	I1105 19:11:30.467614   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:30.467656   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:30.467676   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:30.467722   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:30.467759   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:30.467788   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:30.467847   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:30.468756   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:30.532325   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:30.559936   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:30.592995   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:30.632421   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 19:11:30.662285   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 19:11:30.696292   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:30.725642   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/old-k8s-version-567666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:30.750231   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:30.773213   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:30.796269   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:30.820261   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:30.837059   74485 ssh_runner.go:195] Run: openssl version
	I1105 19:11:30.842937   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:30.855033   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859637   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.859720   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:30.865747   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:30.877678   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:30.890762   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895576   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.895642   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:30.901686   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:30.912689   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:30.923800   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928911   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.928984   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:30.934782   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
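	(editor note) The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash plus ".0". A rough Go sketch of the same steps for a single certificate, shelling out to openssl; paths are taken from the log but the program itself is only an illustration (and needs root to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	// Same as: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Equivalent of: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("symlink already present:", link)
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink failed:", err)
		os.Exit(1)
	}
	fmt.Println("created", link, "->", cert)
}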
	I1105 19:11:30.947059   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:30.951934   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:30.958065   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:30.965341   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:30.971725   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:30.977606   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:30.983486   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
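	(editor note) Each of the -checkend 86400 runs above asks whether a certificate expires within the next 24 hours. A native-Go sketch of the same check, using one of the file paths from the log; this is only an equivalent formulation, not minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same idea as: openssl x509 -noout -in <cert> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "not a PEM certificate")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would need regeneration")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}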
	I1105 19:11:30.989212   74485 kubeadm.go:392] StartCluster: {Name:old-k8s-version-567666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-567666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:30.989350   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:30.989411   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.031794   74485 cri.go:89] found id: ""
	I1105 19:11:31.031884   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:31.043178   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:31.043202   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:31.043291   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:31.054102   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:31.055256   74485 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-567666" does not appear in /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:11:31.055924   74485 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-8296/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-567666" cluster setting kubeconfig missing "old-k8s-version-567666" context setting]
	I1105 19:11:31.056913   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:31.064220   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:31.074582   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.125
	I1105 19:11:31.074618   74485 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:31.074628   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:31.074706   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:31.111157   74485 cri.go:89] found id: ""
	I1105 19:11:31.111241   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:31.130027   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:31.139917   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:31.139939   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:31.140007   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:31.150790   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:31.150868   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:31.161397   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:31.170394   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:31.170462   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:31.179594   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.188892   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:31.188952   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:31.199840   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:31.209166   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:31.209244   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:31.219687   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:31.231079   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:31.350667   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.094565   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.334807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.457538   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:32.534503   74485 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:11:32.534596   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
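	(editor note) The repeated pgrep runs that follow (interleaved with log lines from the other profiles) are a simple poll: retry roughly every 500 ms, as the timestamps show, until the kube-apiserver process appears. A sketch of that wait pattern; the command string is taken from the log, while the loop structure and deadline are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // illustrative deadline, not from the log
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
	}
	fmt.Println("timed out waiting for the kube-apiserver process")
}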
	I1105 19:11:28.464017   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.962422   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:29.725325   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:32.225372   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:30.796963   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:30.797438   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:30.797489   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:30.797407   75501 retry.go:31] will retry after 1.774832047s: waiting for machine to come up
	I1105 19:11:32.574423   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:32.575000   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:32.575047   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:32.574929   75501 retry.go:31] will retry after 2.041093372s: waiting for machine to come up
	I1105 19:11:34.618469   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:34.618954   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:34.619015   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:34.618915   75501 retry.go:31] will retry after 2.731949113s: waiting for machine to come up
	I1105 19:11:33.034690   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:33.535594   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.035526   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:34.534836   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.034947   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:35.535108   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.035417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:36.535438   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.034766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:37.535415   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:32.962469   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.963093   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.461010   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:34.724484   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.224511   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:37.352209   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:37.352752   73496 main.go:141] libmachine: (no-preload-459223) DBG | unable to find current IP address of domain no-preload-459223 in network mk-no-preload-459223
	I1105 19:11:37.352783   73496 main.go:141] libmachine: (no-preload-459223) DBG | I1105 19:11:37.352686   75501 retry.go:31] will retry after 3.62202055s: waiting for machine to come up
	I1105 19:11:38.035553   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:38.534702   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.035332   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.534749   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.034989   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:40.535354   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.035624   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:41.534847   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.035293   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:42.535363   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:39.465635   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:41.961348   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:40.978791   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979231   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has current primary IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.979249   73496 main.go:141] libmachine: (no-preload-459223) Found IP for machine: 192.168.72.101
	I1105 19:11:40.979258   73496 main.go:141] libmachine: (no-preload-459223) Reserving static IP address...
	I1105 19:11:40.979621   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.979650   73496 main.go:141] libmachine: (no-preload-459223) Reserved static IP address: 192.168.72.101
	I1105 19:11:40.979669   73496 main.go:141] libmachine: (no-preload-459223) DBG | skip adding static IP to network mk-no-preload-459223 - found existing host DHCP lease matching {name: "no-preload-459223", mac: "52:54:00:6c:84:79", ip: "192.168.72.101"}
	I1105 19:11:40.979682   73496 main.go:141] libmachine: (no-preload-459223) Waiting for SSH to be available...
	I1105 19:11:40.979710   73496 main.go:141] libmachine: (no-preload-459223) DBG | Getting to WaitForSSH function...
	I1105 19:11:40.981725   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:40.982063   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:40.982202   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH client type: external
	I1105 19:11:40.982227   73496 main.go:141] libmachine: (no-preload-459223) DBG | Using SSH private key: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa (-rw-------)
	I1105 19:11:40.982258   73496 main.go:141] libmachine: (no-preload-459223) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1105 19:11:40.982286   73496 main.go:141] libmachine: (no-preload-459223) DBG | About to run SSH command:
	I1105 19:11:40.982310   73496 main.go:141] libmachine: (no-preload-459223) DBG | exit 0
	I1105 19:11:41.111259   73496 main.go:141] libmachine: (no-preload-459223) DBG | SSH cmd err, output: <nil>: 
	I1105 19:11:41.111639   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetConfigRaw
	I1105 19:11:41.112368   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.114811   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115215   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.115244   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.115499   73496 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/config.json ...
	I1105 19:11:41.115687   73496 machine.go:93] provisionDockerMachine start ...
	I1105 19:11:41.115705   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:41.115900   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.118059   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118481   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.118505   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.118659   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.118833   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.118959   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.119078   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.119222   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.119426   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.119442   73496 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 19:11:41.235030   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 19:11:41.235060   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235270   73496 buildroot.go:166] provisioning hostname "no-preload-459223"
	I1105 19:11:41.235294   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.235480   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.237980   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238288   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.238327   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.238405   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.238567   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238687   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.238805   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.238938   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.239150   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.239163   73496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-459223 && echo "no-preload-459223" | sudo tee /etc/hostname
	I1105 19:11:41.366664   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-459223
	
	I1105 19:11:41.366693   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.369672   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.369979   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.370006   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.370147   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.370335   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.370661   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.370830   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.371067   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.371086   73496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-459223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-459223/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-459223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 19:11:41.495741   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 19:11:41.495774   73496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19910-8296/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-8296/.minikube}
	I1105 19:11:41.495796   73496 buildroot.go:174] setting up certificates
	I1105 19:11:41.495804   73496 provision.go:84] configureAuth start
	I1105 19:11:41.495816   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetMachineName
	I1105 19:11:41.496076   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:41.498948   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499377   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.499409   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.499552   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.501842   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502168   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.502198   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.502367   73496 provision.go:143] copyHostCerts
	I1105 19:11:41.502428   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem, removing ...
	I1105 19:11:41.502445   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem
	I1105 19:11:41.502516   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/ca.pem (1082 bytes)
	I1105 19:11:41.502662   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem, removing ...
	I1105 19:11:41.502674   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem
	I1105 19:11:41.502706   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/cert.pem (1123 bytes)
	I1105 19:11:41.502814   73496 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem, removing ...
	I1105 19:11:41.502825   73496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem
	I1105 19:11:41.502853   73496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-8296/.minikube/key.pem (1675 bytes)
	I1105 19:11:41.502934   73496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem org=jenkins.no-preload-459223 san=[127.0.0.1 192.168.72.101 localhost minikube no-preload-459223]
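	(editor note) The provision.go line above generates a server certificate whose SANs mix IP addresses and DNS names (127.0.0.1, 192.168.72.101, localhost, minikube, no-preload-459223). A self-contained crypto/x509 sketch of building such a SAN list; it is self-signed here for brevity, whereas the log shows the real certificate being signed with the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-459223"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value from the log
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list copied from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.101")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-459223"},
	}
	// Self-signed for brevity; the real flow signs with the CA cert/key shown in the log.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}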
	I1105 19:11:41.648058   73496 provision.go:177] copyRemoteCerts
	I1105 19:11:41.648115   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 19:11:41.648137   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.650915   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651274   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.651306   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.651518   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.651707   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.651878   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.652032   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:41.736549   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1105 19:11:41.759352   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 19:11:41.782205   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 19:11:41.804725   73496 provision.go:87] duration metric: took 308.906806ms to configureAuth
	I1105 19:11:41.804755   73496 buildroot.go:189] setting minikube options for container-runtime
	I1105 19:11:41.804930   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:11:41.805011   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:41.807634   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808035   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:41.808071   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:41.808312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:41.808498   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808657   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:41.808792   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:41.808960   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:41.809113   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:41.809125   73496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 19:11:42.033406   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 19:11:42.033449   73496 machine.go:96] duration metric: took 917.749182ms to provisionDockerMachine
	I1105 19:11:42.033462   73496 start.go:293] postStartSetup for "no-preload-459223" (driver="kvm2")
	I1105 19:11:42.033475   73496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 19:11:42.033506   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.033853   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 19:11:42.033883   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.037259   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037688   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.037722   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.037869   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.038063   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.038231   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.038361   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.126624   73496 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 19:11:42.130761   73496 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 19:11:42.130794   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/addons for local assets ...
	I1105 19:11:42.130881   73496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-8296/.minikube/files for local assets ...
	I1105 19:11:42.131006   73496 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem -> 154922.pem in /etc/ssl/certs
	I1105 19:11:42.131120   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 19:11:42.140978   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:42.163880   73496 start.go:296] duration metric: took 130.405487ms for postStartSetup
	I1105 19:11:42.163933   73496 fix.go:56] duration metric: took 19.580327925s for fixHost
	I1105 19:11:42.163953   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.166648   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.166994   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.167025   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.167196   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.167394   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167565   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.167705   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.167856   73496 main.go:141] libmachine: Using SSH client type: native
	I1105 19:11:42.168016   73496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.101 22 <nil> <nil>}
	I1105 19:11:42.168025   73496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 19:11:42.279303   73496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730833902.251467447
	
	I1105 19:11:42.279336   73496 fix.go:216] guest clock: 1730833902.251467447
	I1105 19:11:42.279351   73496 fix.go:229] Guest: 2024-11-05 19:11:42.251467447 +0000 UTC Remote: 2024-11-05 19:11:42.163937292 +0000 UTC m=+356.505256250 (delta=87.530155ms)
	I1105 19:11:42.279378   73496 fix.go:200] guest clock delta is within tolerance: 87.530155ms
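	(editor note) The guest-clock check above runs date +%s.%N over SSH and compares the result with the host-side timestamp, yielding the 87.53 ms delta logged. A sketch of parsing that seconds.nanoseconds output and computing the delta; the tolerance value is illustrative, since the log only states that the delta passed:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log.
	raw := "1730833902.251467447"
	parts := strings.SplitN(raw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)   // error handling elided in this sketch
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now() // host-side timestamp taken around the SSH call
	delta := guest.Sub(host)
	fmt.Printf("guest clock delta: %v\n", delta)

	const tolerance = time.Second // illustrative threshold only
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Println("clock skew too large; the guest clock would need a resync")
	}
}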
	I1105 19:11:42.279387   73496 start.go:83] releasing machines lock for "no-preload-459223", held for 19.695831159s
	I1105 19:11:42.279417   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.279660   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:42.282462   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.282828   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.282871   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.283018   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283439   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283580   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:11:42.283669   73496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 19:11:42.283716   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.283811   73496 ssh_runner.go:195] Run: cat /version.json
	I1105 19:11:42.283838   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:11:42.286528   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286754   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.286891   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.286917   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287097   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:42.287112   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287124   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:42.287312   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:11:42.287313   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287495   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:11:42.287510   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287666   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:11:42.287664   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.287769   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:11:42.398511   73496 ssh_runner.go:195] Run: systemctl --version
	I1105 19:11:42.404337   73496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 19:11:42.550196   73496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 19:11:42.555775   73496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 19:11:42.555853   73496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 19:11:42.571003   73496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 19:11:42.571031   73496 start.go:495] detecting cgroup driver to use...
	I1105 19:11:42.571123   73496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 19:11:42.586390   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 19:11:42.599887   73496 docker.go:217] disabling cri-docker service (if available) ...
	I1105 19:11:42.599944   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 19:11:42.613260   73496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 19:11:42.626371   73496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 19:11:42.736949   73496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 19:11:42.898897   73496 docker.go:233] disabling docker service ...
	I1105 19:11:42.898965   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 19:11:42.912534   73496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 19:11:42.925075   73496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 19:11:43.043425   73496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 19:11:43.175468   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 19:11:43.190803   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 19:11:43.210413   73496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 19:11:43.210496   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.221971   73496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 19:11:43.222064   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.232251   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.241540   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.251131   73496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 19:11:43.261218   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.270932   73496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.287905   73496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 19:11:43.297730   73496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 19:11:43.307263   73496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 19:11:43.307319   73496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 19:11:43.319421   73496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 19:11:43.328415   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:43.445798   73496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 19:11:43.532190   73496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 19:11:43.532284   73496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 19:11:43.536931   73496 start.go:563] Will wait 60s for crictl version
	I1105 19:11:43.536986   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.540525   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 19:11:43.576428   73496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1105 19:11:43.576540   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.603034   73496 ssh_runner.go:195] Run: crio --version
	I1105 19:11:43.631229   73496 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1105 19:11:39.724162   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:42.224141   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:44.224609   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:43.632482   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetIP
	I1105 19:11:43.634912   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635227   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:11:43.635260   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:11:43.635530   73496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1105 19:11:43.639287   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:43.650818   73496 kubeadm.go:883] updating cluster {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 19:11:43.650963   73496 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 19:11:43.651042   73496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 19:11:43.685392   73496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1105 19:11:43.685421   73496 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1105 19:11:43.685492   73496 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.685500   73496 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.685517   73496 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.685547   73496 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.685506   73496 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.685569   73496 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.685558   73496 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.685623   73496 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:43.686958   73496 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:43.686979   73496 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.686976   73496 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.686951   73496 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.687017   73496 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.687030   73496 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.687057   73496 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1105 19:11:43.898928   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.914069   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1105 19:11:43.934388   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:43.940664   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:43.947392   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:43.951614   73496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1105 19:11:43.951652   73496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:43.951686   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:43.957000   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.045057   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.075256   73496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1105 19:11:44.075289   73496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1105 19:11:44.075304   73496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.075310   73496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075357   73496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1105 19:11:44.075388   73496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.075350   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075417   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.075481   73496 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1105 19:11:44.075431   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.075511   73496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.075543   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.102803   73496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1105 19:11:44.102856   73496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.102910   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.102916   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:44.133582   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.133640   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.133655   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.133707   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.188042   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.188058   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.272464   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.272500   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.272467   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1105 19:11:44.272531   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.289003   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1105 19:11:44.289126   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1105 19:11:44.411155   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1105 19:11:44.411162   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1105 19:11:44.411248   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1105 19:11:44.411307   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1105 19:11:44.411326   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:44.411361   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1105 19:11:44.411394   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:44.411432   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478064   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1105 19:11:44.478093   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478132   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1105 19:11:44.478152   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1105 19:11:44.478178   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1105 19:11:44.478195   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1105 19:11:44.478211   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1105 19:11:44.478226   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:44.478249   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1105 19:11:44.478257   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:44.478324   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:44.889847   73496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:43.035199   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.534769   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.035551   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:44.535664   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.035103   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:45.535581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.035077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:46.535660   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.035462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:47.534898   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:43.962742   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.462884   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.724058   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:48.727054   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:46.976315   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.498135546s)
	I1105 19:11:46.976348   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1105 19:11:46.976361   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.498084867s)
	I1105 19:11:46.976386   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.498096252s)
	I1105 19:11:46.976392   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.498054417s)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1105 19:11:46.976395   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1105 19:11:46.976411   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1105 19:11:46.976368   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976436   73496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.086553002s)
	I1105 19:11:46.976471   73496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1105 19:11:46.976488   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1105 19:11:46.976506   73496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:46.976551   73496 ssh_runner.go:195] Run: which crictl
	I1105 19:11:49.054369   73496 ssh_runner.go:235] Completed: which crictl: (2.077794607s)
	I1105 19:11:49.054455   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:49.054480   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.077976168s)
	I1105 19:11:49.054497   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1105 19:11:49.054520   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.054551   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1105 19:11:49.089648   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.509600   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455021031s)
	I1105 19:11:50.509639   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1105 19:11:50.509664   73496 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509679   73496 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.419997127s)
	I1105 19:11:50.509719   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1105 19:11:50.509751   73496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:11:50.547301   73496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1105 19:11:50.547416   73496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:48.035320   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.535496   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.035636   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:49.535445   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.035499   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:50.535722   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.035700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:51.535310   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.035585   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:52.535468   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:48.962134   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.463479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:51.225155   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:53.723881   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:54.139987   73496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.592545704s)
	I1105 19:11:54.140021   73496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1105 19:11:54.140038   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.630297093s)
	I1105 19:11:54.140058   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1105 19:11:54.140089   73496 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:54.140150   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1105 19:11:53.034919   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.535697   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.035353   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:54.534669   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.034957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:55.534747   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.035331   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:56.534699   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:53.465549   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.961291   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.725153   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:58.224417   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:11:55.887208   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.747032149s)
	I1105 19:11:55.887247   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1105 19:11:55.887278   73496 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:55.887331   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1105 19:11:57.753834   73496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.866475995s)
	I1105 19:11:57.753860   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1105 19:11:57.753879   73496 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:57.753917   73496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1105 19:11:58.605444   73496 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19910-8296/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1105 19:11:58.605490   73496 cache_images.go:123] Successfully loaded all cached images
	I1105 19:11:58.605498   73496 cache_images.go:92] duration metric: took 14.920064519s to LoadCachedImages
	I1105 19:11:58.605512   73496 kubeadm.go:934] updating node { 192.168.72.101 8443 v1.31.2 crio true true} ...
	I1105 19:11:58.605627   73496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-459223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 19:11:58.605719   73496 ssh_runner.go:195] Run: crio config
	I1105 19:11:58.654396   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:11:58.654422   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:11:58.654432   73496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 19:11:58.654456   73496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.101 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-459223 NodeName:no-preload-459223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 19:11:58.654636   73496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-459223"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.101"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.101"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 19:11:58.654714   73496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 19:11:58.666580   73496 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 19:11:58.666659   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 19:11:58.676390   73496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1105 19:11:58.692426   73496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 19:11:58.708650   73496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1105 19:11:58.727451   73496 ssh_runner.go:195] Run: grep 192.168.72.101	control-plane.minikube.internal$ /etc/hosts
	I1105 19:11:58.731200   73496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 19:11:58.743437   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:11:58.850614   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:11:58.867662   73496 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223 for IP: 192.168.72.101
	I1105 19:11:58.867694   73496 certs.go:194] generating shared ca certs ...
	I1105 19:11:58.867715   73496 certs.go:226] acquiring lock for ca certs: {Name:mkafbb3fef270400ed51116ac606e6c07935f686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:11:58.867896   73496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key
	I1105 19:11:58.867954   73496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key
	I1105 19:11:58.867988   73496 certs.go:256] generating profile certs ...
	I1105 19:11:58.868073   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/client.key
	I1105 19:11:58.868129   73496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key.0f61fe1e
	I1105 19:11:58.868163   73496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key
	I1105 19:11:58.868276   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem (1338 bytes)
	W1105 19:11:58.868316   73496 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492_empty.pem, impossibly tiny 0 bytes
	I1105 19:11:58.868323   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca-key.pem (1675 bytes)
	I1105 19:11:58.868347   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/ca.pem (1082 bytes)
	I1105 19:11:58.868380   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/cert.pem (1123 bytes)
	I1105 19:11:58.868409   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/certs/key.pem (1675 bytes)
	I1105 19:11:58.868450   73496 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem (1708 bytes)
	I1105 19:11:58.869179   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 19:11:58.911433   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1105 19:11:58.947863   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 19:11:58.977511   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 19:11:59.022637   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 19:11:59.060992   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 19:11:59.086516   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 19:11:59.109616   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/no-preload-459223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 19:11:59.135019   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/certs/15492.pem --> /usr/share/ca-certificates/15492.pem (1338 bytes)
	I1105 19:11:59.159832   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/ssl/certs/154922.pem --> /usr/share/ca-certificates/154922.pem (1708 bytes)
	I1105 19:11:59.184470   73496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 19:11:59.207138   73496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 19:11:59.224379   73496 ssh_runner.go:195] Run: openssl version
	I1105 19:11:59.230142   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 19:11:59.243624   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248086   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.248157   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 19:11:59.253684   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 19:11:59.264169   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15492.pem && ln -fs /usr/share/ca-certificates/15492.pem /etc/ssl/certs/15492.pem"
	I1105 19:11:59.274837   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279102   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:53 /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.279159   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15492.pem
	I1105 19:11:59.284540   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15492.pem /etc/ssl/certs/51391683.0"
	I1105 19:11:59.295198   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154922.pem && ln -fs /usr/share/ca-certificates/154922.pem /etc/ssl/certs/154922.pem"
	I1105 19:11:59.306105   73496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310073   73496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:53 /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.310115   73496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154922.pem
	I1105 19:11:59.315240   73496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154922.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 19:11:59.325470   73496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 19:11:59.329485   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 19:11:59.334985   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 19:11:59.340316   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 19:11:59.345717   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 19:11:59.351082   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 19:11:59.356631   73496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 19:11:59.361951   73496 kubeadm.go:392] StartCluster: {Name:no-preload-459223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-459223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 19:11:59.362047   73496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 19:11:59.362084   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.398746   73496 cri.go:89] found id: ""
	I1105 19:11:59.398819   73496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 19:11:59.408597   73496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 19:11:59.408614   73496 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 19:11:59.408656   73496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 19:11:59.418082   73496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 19:11:59.419128   73496 kubeconfig.go:125] found "no-preload-459223" server: "https://192.168.72.101:8443"
	I1105 19:11:59.421286   73496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 19:11:59.430458   73496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.101
	I1105 19:11:59.430490   73496 kubeadm.go:1160] stopping kube-system containers ...
	I1105 19:11:59.430500   73496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1105 19:11:59.430549   73496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 19:11:59.464047   73496 cri.go:89] found id: ""
	I1105 19:11:59.464102   73496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 19:11:59.480978   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:11:59.490808   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:11:59.490829   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:11:59.490871   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:11:59.499505   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:11:59.499559   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:11:59.508247   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:11:59.516942   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:11:59.517005   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:11:59.525910   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.534349   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:11:59.534392   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:11:59.544212   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:11:59.553794   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:11:59.553857   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:11:59.562739   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:11:59.571819   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:59.680938   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.564659   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:11:58.034948   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:58.534748   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.034961   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:59.535634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.035311   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:00.534756   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.035266   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.535256   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.035489   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.534701   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:11:57.963075   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.462112   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.224544   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:02.225623   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.226711   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:00.775338   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.844402   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:00.957534   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:12:00.957630   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.458375   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.958215   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:01.975834   73496 api_server.go:72] duration metric: took 1.018298528s to wait for apiserver process to appear ...
	I1105 19:12:01.975862   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:12:01.975884   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.774116   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.774149   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.774164   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.825378   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 19:12:04.825427   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 19:12:04.976663   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:04.984209   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:04.984244   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.476825   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.484608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.484644   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:05.975985   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:05.981608   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 19:12:05.981639   73496 api_server.go:103] status: https://192.168.72.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 19:12:06.476014   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:12:06.480296   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:12:06.487584   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:12:06.487613   73496 api_server.go:131] duration metric: took 4.511744097s to wait for apiserver health ...
	I1105 19:12:06.487623   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:12:06.487632   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:12:06.489302   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
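The polling above shows the apiserver's /healthz endpoint answering 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. A minimal sketch of an equivalent external probe in Go, assuming the endpoint taken from the log and an insecure TLS client purely for illustration; minikube's own check lives in api_server.go and is not reproduced here:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint copied from the log above; adjust for your cluster.
		const url = "https://192.168.72.101:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			// Cert verification is skipped only because this is an illustrative probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports healthy, matching the 200 "ok" above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver /healthz")
	}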
	I1105 19:12:03.034795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:03.534764   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.034833   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:04.534795   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.034815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:05.534885   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:06.535327   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.035253   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:07.535011   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:02.961693   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:04.962003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:07.461125   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.724362   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:09.224191   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:06.490496   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:12:06.500809   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:12:06.529242   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:12:06.542769   73496 system_pods.go:59] 8 kube-system pods found
	I1105 19:12:06.542806   73496 system_pods.go:61] "coredns-7c65d6cfc9-9vvhj" [fde1a6e7-6807-440c-a38d-4f39ede6c11e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:12:06.542818   73496 system_pods.go:61] "etcd-no-preload-459223" [398e3fc3-6902-4cbb-bc50-a72bab461839] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 19:12:06.542828   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [33a306b0-a41d-4ca3-9d01-69faa7825fe8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 19:12:06.542837   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [865ae24c-d991-4650-9e17-7242f84403e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 19:12:06.542844   73496 system_pods.go:61] "kube-proxy-6h584" [dd35774f-a245-42af-8fe9-bd6933ad0e30] Running
	I1105 19:12:06.542852   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [27d3685e-d548-49b6-a24d-02b1f8656c66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 19:12:06.542859   73496 system_pods.go:61] "metrics-server-6867b74b74-5sp2j" [7ddaa66e-b4ba-4241-8dba-5fc6ab66d777] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:12:06.542864   73496 system_pods.go:61] "storage-provisioner" [49786ba3-e9fc-45ad-9418-fd3a0a7b652c] Running
	I1105 19:12:06.542873   73496 system_pods.go:74] duration metric: took 13.603868ms to wait for pod list to return data ...
	I1105 19:12:06.542883   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:12:06.549398   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:12:06.549425   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:12:06.549435   73496 node_conditions.go:105] duration metric: took 6.546615ms to run NodePressure ...
	I1105 19:12:06.549452   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 19:12:06.812829   73496 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818052   73496 kubeadm.go:739] kubelet initialised
	I1105 19:12:06.818082   73496 kubeadm.go:740] duration metric: took 5.227942ms waiting for restarted kubelet to initialise ...
	I1105 19:12:06.818093   73496 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:12:06.823883   73496 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.830129   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830164   73496 pod_ready.go:82] duration metric: took 6.253499ms for pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.830176   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "coredns-7c65d6cfc9-9vvhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.830187   73496 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.834901   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834942   73496 pod_ready.go:82] duration metric: took 4.743456ms for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.834954   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "etcd-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.834988   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.841446   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841474   73496 pod_ready.go:82] duration metric: took 6.472942ms for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.841485   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-apiserver-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.841494   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:06.933972   73496 pod_ready.go:98] node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.933998   73496 pod_ready.go:82] duration metric: took 92.493084ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	E1105 19:12:06.934006   73496 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-459223" hosting pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-459223" has status "Ready":"False"
	I1105 19:12:06.934012   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333443   73496 pod_ready.go:93] pod "kube-proxy-6h584" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:07.333473   73496 pod_ready.go:82] duration metric: took 399.45278ms for pod "kube-proxy-6h584" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:07.333486   73496 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:09.339907   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
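The pod_ready lines interleaved through this log come from a loop that re-reads each pod and waits for its Ready condition to turn True before the per-pod timeout expires. A rough client-go sketch of that kind of check, assuming a hypothetical kubeconfig path and a pod name taken from the log; this is an approximation, not minikube's actual wait implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig location used only for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Namespace and pod name taken from the log above.
		const ns, name = "kube-system", "kube-scheduler-no-preload-459223"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}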
	I1105 19:12:08.035104   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:08.534784   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.035198   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.535319   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.035258   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:10.534634   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.035604   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:11.535077   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.035096   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:12.534812   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:09.961614   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.962113   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.724418   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.724954   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:11.839467   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.839725   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:13.035100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:13.534793   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.035120   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.535318   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.035062   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:15.535127   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.034840   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:16.534830   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.035105   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:17.534928   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:14.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.961398   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.224300   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.729666   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:16.339542   73496 pod_ready.go:103] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:17.840399   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:12:17.840424   73496 pod_ready.go:82] duration metric: took 10.506929493s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:17.840433   73496 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	I1105 19:12:19.846676   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:18.035126   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:18.535446   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.035154   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.535413   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.035580   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:20.534802   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.035030   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:21.535250   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.034785   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:22.534700   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:19.460480   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.461609   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.223496   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.224908   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:21.847279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:24.347279   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:23.034721   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.534672   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.035358   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:24.534813   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.035581   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:25.535342   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.034934   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:26.534766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.035389   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:27.534831   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:23.961556   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.460682   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:25.723807   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:27.724515   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:26.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.346351   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:28.035226   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:28.535577   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.034984   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:29.535633   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.035509   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:30.534907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.035372   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:31.535421   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.034719   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:32.534952   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:32.535067   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:32.575052   74485 cri.go:89] found id: ""
	I1105 19:12:32.575085   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.575096   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:32.575104   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:32.575164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:32.609969   74485 cri.go:89] found id: ""
	I1105 19:12:32.610003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.610011   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:32.610017   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:32.610065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:32.642343   74485 cri.go:89] found id: ""
	I1105 19:12:32.642369   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.642376   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:32.642381   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:32.642426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:28.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:30.960340   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:29.725101   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.224788   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:31.346559   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:33.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:32.680144   74485 cri.go:89] found id: ""
	I1105 19:12:32.680177   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.680188   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:32.680196   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:32.680270   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:32.715216   74485 cri.go:89] found id: ""
	I1105 19:12:32.715248   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.715259   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:32.715267   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:32.715321   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:32.751742   74485 cri.go:89] found id: ""
	I1105 19:12:32.751771   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.751795   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:32.751803   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:32.751865   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:32.786944   74485 cri.go:89] found id: ""
	I1105 19:12:32.787003   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.787015   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:32.787023   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:32.787080   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:32.820523   74485 cri.go:89] found id: ""
	I1105 19:12:32.820550   74485 logs.go:282] 0 containers: []
	W1105 19:12:32.820557   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:32.820565   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:32.820575   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:32.873960   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:32.874000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:32.889268   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:32.889296   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:33.011825   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:33.011846   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:33.011862   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:33.082785   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:33.082827   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:35.630678   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:35.644410   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:35.644492   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:35.679567   74485 cri.go:89] found id: ""
	I1105 19:12:35.679598   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.679607   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:35.679613   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:35.679666   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:35.713685   74485 cri.go:89] found id: ""
	I1105 19:12:35.713713   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.713721   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:35.713726   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:35.713789   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:35.749496   74485 cri.go:89] found id: ""
	I1105 19:12:35.749525   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.749536   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:35.749543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:35.749611   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:35.784228   74485 cri.go:89] found id: ""
	I1105 19:12:35.784254   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.784263   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:35.784269   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:35.784317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:35.818620   74485 cri.go:89] found id: ""
	I1105 19:12:35.818680   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.818696   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:35.818703   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:35.818769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:35.852525   74485 cri.go:89] found id: ""
	I1105 19:12:35.852554   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.852566   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:35.852574   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:35.852648   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:35.887906   74485 cri.go:89] found id: ""
	I1105 19:12:35.887931   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.887939   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:35.887944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:35.887994   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:35.920566   74485 cri.go:89] found id: ""
	I1105 19:12:35.920594   74485 logs.go:282] 0 containers: []
	W1105 19:12:35.920602   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:35.920612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:35.920627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:35.972706   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:35.972742   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:35.986114   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:35.986141   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:36.067016   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:36.067044   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:36.067060   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:36.158947   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:36.159003   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:32.962679   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.461449   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:37.462001   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:34.724028   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:36.724174   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.728373   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:35.848563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.347478   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:40.347899   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:38.700738   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:38.713280   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:38.713351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:38.747293   74485 cri.go:89] found id: ""
	I1105 19:12:38.747335   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.747347   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:38.747355   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:38.747414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:38.781607   74485 cri.go:89] found id: ""
	I1105 19:12:38.781635   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.781643   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:38.781648   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:38.781703   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:38.815303   74485 cri.go:89] found id: ""
	I1105 19:12:38.815333   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.815342   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:38.815348   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:38.815397   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:38.850128   74485 cri.go:89] found id: ""
	I1105 19:12:38.850156   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.850166   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:38.850174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:38.850233   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:38.882470   74485 cri.go:89] found id: ""
	I1105 19:12:38.882493   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.882500   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:38.882506   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:38.882563   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:38.914669   74485 cri.go:89] found id: ""
	I1105 19:12:38.914698   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.914706   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:38.914713   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:38.914762   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:38.946521   74485 cri.go:89] found id: ""
	I1105 19:12:38.946548   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.946556   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:38.946561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:38.946613   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:38.979628   74485 cri.go:89] found id: ""
	I1105 19:12:38.979655   74485 logs.go:282] 0 containers: []
	W1105 19:12:38.979663   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:38.979672   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:38.979682   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:39.056066   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:39.056102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.092303   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:39.092333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:39.143754   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:39.143790   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:39.156553   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:39.156587   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:39.220882   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:41.721766   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:41.734823   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:41.734893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:41.768636   74485 cri.go:89] found id: ""
	I1105 19:12:41.768668   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.768685   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:41.768693   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:41.768750   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:41.809506   74485 cri.go:89] found id: ""
	I1105 19:12:41.809533   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.809541   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:41.809546   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:41.809606   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:41.849953   74485 cri.go:89] found id: ""
	I1105 19:12:41.849977   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.849985   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:41.849991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:41.850037   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:41.893042   74485 cri.go:89] found id: ""
	I1105 19:12:41.893072   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.893084   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:41.893091   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:41.893152   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:41.936259   74485 cri.go:89] found id: ""
	I1105 19:12:41.936282   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.936292   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:41.936298   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:41.936347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:41.970322   74485 cri.go:89] found id: ""
	I1105 19:12:41.970344   74485 logs.go:282] 0 containers: []
	W1105 19:12:41.970353   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:41.970360   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:41.970427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:42.004351   74485 cri.go:89] found id: ""
	I1105 19:12:42.004375   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.004383   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:42.004388   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:42.004443   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:42.035136   74485 cri.go:89] found id: ""
	I1105 19:12:42.035163   74485 logs.go:282] 0 containers: []
	W1105 19:12:42.035174   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:42.035185   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:42.035201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:42.086760   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:42.086801   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:42.100795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:42.100829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:42.167480   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:42.167509   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:42.167529   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:42.248625   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:42.248664   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:39.961606   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.461423   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:41.224956   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:43.724906   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:42.846509   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.847235   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:44.785100   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:44.798182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:44.798248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:44.834080   74485 cri.go:89] found id: ""
	I1105 19:12:44.834107   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.834115   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:44.834120   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:44.834179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:44.870572   74485 cri.go:89] found id: ""
	I1105 19:12:44.870602   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.870613   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:44.870620   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:44.870691   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:44.908960   74485 cri.go:89] found id: ""
	I1105 19:12:44.908991   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.909002   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:44.909010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:44.909075   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:44.945310   74485 cri.go:89] found id: ""
	I1105 19:12:44.945342   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.945350   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:44.945355   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:44.945409   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:44.982893   74485 cri.go:89] found id: ""
	I1105 19:12:44.982935   74485 logs.go:282] 0 containers: []
	W1105 19:12:44.982946   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:44.982953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:44.983030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:45.015529   74485 cri.go:89] found id: ""
	I1105 19:12:45.015559   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.015571   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:45.015578   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:45.015640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:45.047252   74485 cri.go:89] found id: ""
	I1105 19:12:45.047284   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.047295   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:45.047302   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:45.047364   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:45.082963   74485 cri.go:89] found id: ""
	I1105 19:12:45.083009   74485 logs.go:282] 0 containers: []
	W1105 19:12:45.083018   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:45.083026   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:45.083039   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:45.131844   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:45.131881   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:45.145500   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:45.145530   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:45.214668   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:45.214709   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:45.214725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:45.291203   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:45.291243   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:44.963672   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.461610   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:46.223849   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:48.225352   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.346007   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:49.346691   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:47.831908   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:47.844873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:47.844957   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:47.881587   74485 cri.go:89] found id: ""
	I1105 19:12:47.881617   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.881628   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:47.881644   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:47.881714   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:47.918381   74485 cri.go:89] found id: ""
	I1105 19:12:47.918411   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.918423   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:47.918430   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:47.918491   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:47.950835   74485 cri.go:89] found id: ""
	I1105 19:12:47.950864   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.950880   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:47.950889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:47.950947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:47.985234   74485 cri.go:89] found id: ""
	I1105 19:12:47.985261   74485 logs.go:282] 0 containers: []
	W1105 19:12:47.985272   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:47.985279   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:47.985338   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:48.019406   74485 cri.go:89] found id: ""
	I1105 19:12:48.019437   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.019448   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:48.019455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:48.019532   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:48.053126   74485 cri.go:89] found id: ""
	I1105 19:12:48.053160   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.053172   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:48.053180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:48.053241   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:48.086847   74485 cri.go:89] found id: ""
	I1105 19:12:48.086872   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.086879   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:48.086885   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:48.086944   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:48.122366   74485 cri.go:89] found id: ""
	I1105 19:12:48.122388   74485 logs.go:282] 0 containers: []
	W1105 19:12:48.122396   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:48.122404   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:48.122421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:48.171579   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:48.171622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:48.185207   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:48.185234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:48.249553   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:48.249575   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:48.249586   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:48.323391   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:48.323427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:50.861939   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:50.874943   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:50.875041   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:50.911498   74485 cri.go:89] found id: ""
	I1105 19:12:50.911522   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.911530   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:50.911536   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:50.911591   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:50.946936   74485 cri.go:89] found id: ""
	I1105 19:12:50.946962   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.946988   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:50.947034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:50.947098   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:50.983220   74485 cri.go:89] found id: ""
	I1105 19:12:50.983246   74485 logs.go:282] 0 containers: []
	W1105 19:12:50.983258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:50.983265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:50.983314   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:51.017052   74485 cri.go:89] found id: ""
	I1105 19:12:51.017078   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.017086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:51.017092   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:51.017141   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:51.051417   74485 cri.go:89] found id: ""
	I1105 19:12:51.051448   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.051459   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:51.051466   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:51.051529   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:51.085129   74485 cri.go:89] found id: ""
	I1105 19:12:51.085164   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.085177   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:51.085182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:51.085232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:51.122065   74485 cri.go:89] found id: ""
	I1105 19:12:51.122100   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.122113   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:51.122120   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:51.122178   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:51.154909   74485 cri.go:89] found id: ""
	I1105 19:12:51.154938   74485 logs.go:282] 0 containers: []
	W1105 19:12:51.154946   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:51.154954   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:51.154966   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:51.167768   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:51.167798   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:51.231849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:51.231873   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:51.231897   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:51.314426   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:51.314487   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:51.356654   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:51.356685   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:49.961294   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.461707   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:50.723534   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:52.723821   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:51.347677   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.847328   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:53.911774   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:53.924884   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:53.924968   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:53.957690   74485 cri.go:89] found id: ""
	I1105 19:12:53.957719   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.957729   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:53.957737   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:53.957802   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:53.990717   74485 cri.go:89] found id: ""
	I1105 19:12:53.990744   74485 logs.go:282] 0 containers: []
	W1105 19:12:53.990751   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:53.990757   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:53.990803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:54.023229   74485 cri.go:89] found id: ""
	I1105 19:12:54.023251   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.023258   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:54.023263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:54.023320   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:54.056950   74485 cri.go:89] found id: ""
	I1105 19:12:54.056977   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.056987   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:54.056995   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:54.057056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:54.091729   74485 cri.go:89] found id: ""
	I1105 19:12:54.091756   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.091768   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:54.091776   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:54.091828   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:54.123964   74485 cri.go:89] found id: ""
	I1105 19:12:54.123991   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.124001   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:54.124009   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:54.124070   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:54.155164   74485 cri.go:89] found id: ""
	I1105 19:12:54.155195   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.155204   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:54.155209   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:54.155268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:54.188161   74485 cri.go:89] found id: ""
	I1105 19:12:54.188191   74485 logs.go:282] 0 containers: []
	W1105 19:12:54.188202   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:54.188213   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:54.188226   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:54.240906   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:54.240941   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:54.254061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:54.254093   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:54.321973   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:54.322007   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:54.322026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:54.405106   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:54.405147   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:56.941801   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:56.954658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:56.954741   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:12:56.990372   74485 cri.go:89] found id: ""
	I1105 19:12:56.990400   74485 logs.go:282] 0 containers: []
	W1105 19:12:56.990411   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:12:56.990419   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:12:56.990479   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:12:57.023047   74485 cri.go:89] found id: ""
	I1105 19:12:57.023082   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.023093   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:12:57.023102   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:12:57.023163   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:12:57.054991   74485 cri.go:89] found id: ""
	I1105 19:12:57.055021   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.055030   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:12:57.055036   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:12:57.055094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:12:57.086182   74485 cri.go:89] found id: ""
	I1105 19:12:57.086214   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.086225   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:12:57.086233   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:12:57.086295   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:12:57.120322   74485 cri.go:89] found id: ""
	I1105 19:12:57.120350   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.120361   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:12:57.120368   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:12:57.120431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:12:57.153751   74485 cri.go:89] found id: ""
	I1105 19:12:57.153781   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.153790   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:12:57.153796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:12:57.153845   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:12:57.189208   74485 cri.go:89] found id: ""
	I1105 19:12:57.189234   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.189244   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:12:57.189251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:12:57.189317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:12:57.223259   74485 cri.go:89] found id: ""
	I1105 19:12:57.223292   74485 logs.go:282] 0 containers: []
	W1105 19:12:57.223301   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:12:57.223308   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:12:57.223320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:12:57.273063   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:12:57.273098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:57.287759   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:12:57.287783   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:12:57.353387   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:12:57.353409   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:12:57.353421   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:12:57.426374   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:12:57.426411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:12:54.462191   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.960479   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:54.723926   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:56.724988   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.224704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:55.847609   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:58.347062   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.348243   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:12:59.965907   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:12:59.979081   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:12:59.979149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:00.010955   74485 cri.go:89] found id: ""
	I1105 19:13:00.011001   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.011012   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:00.011021   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:00.011081   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:00.044800   74485 cri.go:89] found id: ""
	I1105 19:13:00.044825   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.044832   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:00.044838   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:00.044894   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:00.082999   74485 cri.go:89] found id: ""
	I1105 19:13:00.083040   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.083050   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:00.083059   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:00.083125   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:00.120792   74485 cri.go:89] found id: ""
	I1105 19:13:00.120826   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.120835   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:00.120840   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:00.120903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:00.153156   74485 cri.go:89] found id: ""
	I1105 19:13:00.153188   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.153200   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:00.153207   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:00.153273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:00.189039   74485 cri.go:89] found id: ""
	I1105 19:13:00.189066   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.189073   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:00.189079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:00.189143   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:00.220904   74485 cri.go:89] found id: ""
	I1105 19:13:00.220932   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.220942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:00.220950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:00.221012   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:00.255414   74485 cri.go:89] found id: ""
	I1105 19:13:00.255443   74485 logs.go:282] 0 containers: []
	W1105 19:13:00.255454   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:00.255464   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:00.255480   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:00.329027   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:00.329050   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:00.329061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:00.405813   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:00.405847   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:00.443302   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:00.443332   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:00.498413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:00.498452   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:12:58.960870   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:00.962098   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:01.723865   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.724945   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:02.846369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:04.846751   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:03.011897   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:03.025351   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:03.025419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:03.058881   74485 cri.go:89] found id: ""
	I1105 19:13:03.058910   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.058920   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:03.058928   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:03.059018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:03.093549   74485 cri.go:89] found id: ""
	I1105 19:13:03.093580   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.093592   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:03.093600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:03.093660   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:03.132355   74485 cri.go:89] found id: ""
	I1105 19:13:03.132384   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.132395   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:03.132402   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:03.132463   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:03.164832   74485 cri.go:89] found id: ""
	I1105 19:13:03.164864   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.164875   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:03.164888   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:03.164947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:03.203187   74485 cri.go:89] found id: ""
	I1105 19:13:03.203213   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.203221   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:03.203226   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:03.203282   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:03.238867   74485 cri.go:89] found id: ""
	I1105 19:13:03.238899   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.238921   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:03.238928   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:03.239010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:03.276139   74485 cri.go:89] found id: ""
	I1105 19:13:03.276174   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.276187   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:03.276195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:03.276251   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:03.312588   74485 cri.go:89] found id: ""
	I1105 19:13:03.312613   74485 logs.go:282] 0 containers: []
	W1105 19:13:03.312631   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:03.312639   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:03.312650   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:03.379754   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:03.379782   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:03.379797   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:03.455719   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:03.455754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.493428   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:03.493458   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:03.545447   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:03.545481   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.060213   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:06.074756   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:06.074831   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:06.111392   74485 cri.go:89] found id: ""
	I1105 19:13:06.111421   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.111429   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:06.111435   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:06.111493   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:06.147535   74485 cri.go:89] found id: ""
	I1105 19:13:06.147568   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.147579   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:06.147585   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:06.147646   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:06.183176   74485 cri.go:89] found id: ""
	I1105 19:13:06.183198   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.183205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:06.183211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:06.183262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:06.213957   74485 cri.go:89] found id: ""
	I1105 19:13:06.213983   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.213992   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:06.213997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:06.214060   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:06.251199   74485 cri.go:89] found id: ""
	I1105 19:13:06.251227   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.251234   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:06.251240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:06.251297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:06.288128   74485 cri.go:89] found id: ""
	I1105 19:13:06.288157   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.288167   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:06.288174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:06.288236   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:06.325265   74485 cri.go:89] found id: ""
	I1105 19:13:06.325296   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.325306   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:06.325314   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:06.325375   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:06.359649   74485 cri.go:89] found id: ""
	I1105 19:13:06.359689   74485 logs.go:282] 0 containers: []
	W1105 19:13:06.359700   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:06.359710   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:06.359725   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:06.408423   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:06.408456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:06.421776   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:06.421804   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:06.487464   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:06.487493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:06.487507   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:06.565789   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:06.565829   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:03.461192   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:05.725002   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:08.225146   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:07.346498   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.347264   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:09.104578   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:09.117930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:09.118022   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:09.156055   74485 cri.go:89] found id: ""
	I1105 19:13:09.156083   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.156093   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:09.156101   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:09.156161   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:09.190470   74485 cri.go:89] found id: ""
	I1105 19:13:09.190499   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.190509   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:09.190516   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:09.190576   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:09.222568   74485 cri.go:89] found id: ""
	I1105 19:13:09.222595   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.222606   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:09.222612   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:09.222677   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:09.260251   74485 cri.go:89] found id: ""
	I1105 19:13:09.260282   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.260292   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:09.260300   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:09.260362   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:09.296006   74485 cri.go:89] found id: ""
	I1105 19:13:09.296036   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.296047   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:09.296054   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:09.296118   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:09.331213   74485 cri.go:89] found id: ""
	I1105 19:13:09.331246   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.331257   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:09.331265   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:09.331333   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:09.364286   74485 cri.go:89] found id: ""
	I1105 19:13:09.364316   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.364327   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:09.364335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:09.364445   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:09.398060   74485 cri.go:89] found id: ""
	I1105 19:13:09.398084   74485 logs.go:282] 0 containers: []
	W1105 19:13:09.398092   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:09.398101   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:09.398113   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:09.447373   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:09.447409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:09.461483   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:09.461514   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:09.528213   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:09.528236   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:09.528248   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:09.607397   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:09.607430   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.146158   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:12.159183   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:12.159262   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:12.193917   74485 cri.go:89] found id: ""
	I1105 19:13:12.193952   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.193963   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:12.193971   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:12.194036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:12.226558   74485 cri.go:89] found id: ""
	I1105 19:13:12.226585   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.226594   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:12.226600   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:12.226662   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:12.258437   74485 cri.go:89] found id: ""
	I1105 19:13:12.258469   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.258481   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:12.258488   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:12.258557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:12.291308   74485 cri.go:89] found id: ""
	I1105 19:13:12.291341   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.291353   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:12.291361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:12.291431   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:12.325768   74485 cri.go:89] found id: ""
	I1105 19:13:12.325801   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.325812   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:12.325819   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:12.325884   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:12.361077   74485 cri.go:89] found id: ""
	I1105 19:13:12.361100   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.361108   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:12.361118   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:12.361179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:12.394769   74485 cri.go:89] found id: ""
	I1105 19:13:12.394791   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.394800   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:12.394806   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:12.394864   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:12.430138   74485 cri.go:89] found id: ""
	I1105 19:13:12.430167   74485 logs.go:282] 0 containers: []
	W1105 19:13:12.430177   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:12.430189   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:12.430200   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:12.472596   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:12.472637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:12.523107   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:12.523143   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:12.535797   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:12.535824   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:12.604088   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:12.604108   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:12.604123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:08.460647   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.462830   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:10.225468   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:12.225693   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:11.849320   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.347487   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:15.185725   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:15.200158   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:15.200238   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:15.238309   74485 cri.go:89] found id: ""
	I1105 19:13:15.238334   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.238342   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:15.238349   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:15.238404   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:15.272897   74485 cri.go:89] found id: ""
	I1105 19:13:15.272927   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.272938   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:15.272945   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:15.273013   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:15.307700   74485 cri.go:89] found id: ""
	I1105 19:13:15.307726   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.307737   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:15.307744   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:15.307810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:15.340156   74485 cri.go:89] found id: ""
	I1105 19:13:15.340182   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.340196   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:15.340202   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:15.340252   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:15.375930   74485 cri.go:89] found id: ""
	I1105 19:13:15.375963   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.375971   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:15.375976   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:15.376031   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:15.409876   74485 cri.go:89] found id: ""
	I1105 19:13:15.409905   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.409915   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:15.409922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:15.409984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:15.442781   74485 cri.go:89] found id: ""
	I1105 19:13:15.442808   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.442819   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:15.442825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:15.442896   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:15.480578   74485 cri.go:89] found id: ""
	I1105 19:13:15.480606   74485 logs.go:282] 0 containers: []
	W1105 19:13:15.480614   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:15.480623   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:15.480634   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:15.530910   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:15.530952   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:15.544351   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:15.544382   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:15.618345   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:15.618373   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:15.618396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:15.704408   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:15.704451   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:14.961408   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.961486   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:14.724130   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.724204   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.724704   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:16.347818   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.846423   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:18.244882   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:18.258667   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:18.258758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:18.292140   74485 cri.go:89] found id: ""
	I1105 19:13:18.292163   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.292171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:18.292178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:18.292235   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:18.324954   74485 cri.go:89] found id: ""
	I1105 19:13:18.324979   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.324985   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:18.324991   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:18.325048   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:18.361943   74485 cri.go:89] found id: ""
	I1105 19:13:18.361972   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.361983   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:18.361991   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:18.362062   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:18.396012   74485 cri.go:89] found id: ""
	I1105 19:13:18.396036   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.396044   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:18.396050   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:18.396097   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:18.428852   74485 cri.go:89] found id: ""
	I1105 19:13:18.428875   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.428883   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:18.428889   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:18.428946   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:18.464364   74485 cri.go:89] found id: ""
	I1105 19:13:18.464390   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.464397   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:18.464404   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:18.464464   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:18.496478   74485 cri.go:89] found id: ""
	I1105 19:13:18.496505   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.496514   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:18.496519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:18.496577   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:18.530313   74485 cri.go:89] found id: ""
	I1105 19:13:18.530339   74485 logs.go:282] 0 containers: []
	W1105 19:13:18.530348   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:18.530356   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:18.530368   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:18.582593   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:18.582627   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:18.596580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:18.596616   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:18.663920   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:18.663959   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:18.663974   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:18.740706   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:18.740746   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.281614   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:21.295841   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:21.295919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:21.330832   74485 cri.go:89] found id: ""
	I1105 19:13:21.330856   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.330864   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:21.330869   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:21.330922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:21.365228   74485 cri.go:89] found id: ""
	I1105 19:13:21.365257   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.365265   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:21.365269   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:21.365317   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:21.418675   74485 cri.go:89] found id: ""
	I1105 19:13:21.418702   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.418719   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:21.418727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:21.418793   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:21.453966   74485 cri.go:89] found id: ""
	I1105 19:13:21.453994   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.454003   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:21.454008   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:21.454058   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:21.492030   74485 cri.go:89] found id: ""
	I1105 19:13:21.492056   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.492067   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:21.492078   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:21.492128   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:21.529146   74485 cri.go:89] found id: ""
	I1105 19:13:21.529174   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.529183   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:21.529190   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:21.529250   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:21.566491   74485 cri.go:89] found id: ""
	I1105 19:13:21.566519   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.566528   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:21.566533   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:21.566595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:21.605720   74485 cri.go:89] found id: ""
	I1105 19:13:21.605745   74485 logs.go:282] 0 containers: []
	W1105 19:13:21.605754   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:21.605762   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:21.605772   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:21.682385   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:21.682408   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:21.682420   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:21.764519   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:21.764557   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:21.805090   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:21.805117   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:21.857560   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:21.857593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:19.462045   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.961995   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:21.224702   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.226864   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:20.850915   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:23.346819   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.347230   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:24.371420   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:24.384566   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:24.384634   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:24.416283   74485 cri.go:89] found id: ""
	I1105 19:13:24.416308   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.416319   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:24.416327   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:24.416388   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:24.452875   74485 cri.go:89] found id: ""
	I1105 19:13:24.452899   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.452907   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:24.452913   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:24.452964   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:24.489946   74485 cri.go:89] found id: ""
	I1105 19:13:24.489974   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.489992   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:24.490000   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:24.490056   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:24.527348   74485 cri.go:89] found id: ""
	I1105 19:13:24.527377   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.527388   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:24.527395   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:24.527451   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:24.558992   74485 cri.go:89] found id: ""
	I1105 19:13:24.559024   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.559035   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:24.559047   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:24.559105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:24.591405   74485 cri.go:89] found id: ""
	I1105 19:13:24.591437   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.591448   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:24.591455   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:24.591516   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.625002   74485 cri.go:89] found id: ""
	I1105 19:13:24.625031   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.625040   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:24.625048   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:24.625114   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:24.657867   74485 cri.go:89] found id: ""
	I1105 19:13:24.657896   74485 logs.go:282] 0 containers: []
	W1105 19:13:24.657907   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:24.657918   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:24.657931   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:24.708444   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:24.708482   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:24.721771   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:24.721814   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:24.793946   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:24.793980   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:24.793996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:24.875130   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:24.875167   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:27.412872   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:27.426996   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:27.427072   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:27.462434   74485 cri.go:89] found id: ""
	I1105 19:13:27.462458   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.462468   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:27.462475   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:27.462536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:27.496916   74485 cri.go:89] found id: ""
	I1105 19:13:27.496951   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.496962   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:27.496969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:27.497035   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:27.528826   74485 cri.go:89] found id: ""
	I1105 19:13:27.528853   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.528861   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:27.528867   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:27.528919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:27.563164   74485 cri.go:89] found id: ""
	I1105 19:13:27.563193   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.563204   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:27.563210   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:27.563284   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:27.600136   74485 cri.go:89] found id: ""
	I1105 19:13:27.600164   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.600174   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:27.600180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:27.600247   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:27.634326   74485 cri.go:89] found id: ""
	I1105 19:13:27.634358   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.634368   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:27.634377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:27.634452   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:24.462295   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:26.961567   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:25.723935   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.725498   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.847362   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.349542   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:27.668154   74485 cri.go:89] found id: ""
	I1105 19:13:27.668185   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.668196   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:27.668203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:27.668263   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:27.706016   74485 cri.go:89] found id: ""
	I1105 19:13:27.706043   74485 logs.go:282] 0 containers: []
	W1105 19:13:27.706051   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:27.706059   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:27.706071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:27.755890   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:27.755929   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:27.773038   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:27.773063   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:27.863392   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:27.863414   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:27.863429   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:27.949149   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:27.949185   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.489333   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:30.502794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:30.502878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:30.536263   74485 cri.go:89] found id: ""
	I1105 19:13:30.536289   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.536297   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:30.536302   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:30.536347   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:30.570418   74485 cri.go:89] found id: ""
	I1105 19:13:30.570445   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.570455   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:30.570462   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:30.570523   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:30.601972   74485 cri.go:89] found id: ""
	I1105 19:13:30.602003   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.602013   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:30.602020   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:30.602086   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:30.634151   74485 cri.go:89] found id: ""
	I1105 19:13:30.634183   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.634195   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:30.634203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:30.634265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:30.666384   74485 cri.go:89] found id: ""
	I1105 19:13:30.666415   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.666425   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:30.666433   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:30.666498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:30.699587   74485 cri.go:89] found id: ""
	I1105 19:13:30.699619   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.699631   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:30.699639   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:30.699699   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:30.731917   74485 cri.go:89] found id: ""
	I1105 19:13:30.731972   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.731983   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:30.731990   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:30.732051   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:30.768807   74485 cri.go:89] found id: ""
	I1105 19:13:30.768832   74485 logs.go:282] 0 containers: []
	W1105 19:13:30.768840   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:30.768849   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:30.768860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:30.848594   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:30.848626   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:30.889031   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:30.889067   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:30.940550   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:30.940588   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:30.953810   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:30.953845   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:31.023633   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:29.461686   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:31.961484   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:30.225024   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.723965   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:32.847298   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:35.347135   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:33.524150   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:33.539025   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:33.539112   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:33.584756   74485 cri.go:89] found id: ""
	I1105 19:13:33.584786   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.584799   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:33.584807   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:33.584869   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:33.624785   74485 cri.go:89] found id: ""
	I1105 19:13:33.624816   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.624829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:33.624836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:33.625025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:33.668750   74485 cri.go:89] found id: ""
	I1105 19:13:33.668783   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.668794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:33.668804   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:33.668867   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:33.701675   74485 cri.go:89] found id: ""
	I1105 19:13:33.701707   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.701735   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:33.701743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:33.701817   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:33.737368   74485 cri.go:89] found id: ""
	I1105 19:13:33.737393   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.737401   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:33.737407   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:33.737458   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:33.770589   74485 cri.go:89] found id: ""
	I1105 19:13:33.770620   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.770630   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:33.770638   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:33.770704   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:33.802635   74485 cri.go:89] found id: ""
	I1105 19:13:33.802668   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.802680   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:33.802687   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:33.802751   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:33.839274   74485 cri.go:89] found id: ""
	I1105 19:13:33.839301   74485 logs.go:282] 0 containers: []
	W1105 19:13:33.839309   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:33.839317   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:33.839328   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:33.881049   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:33.881090   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:33.932704   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:33.932743   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:33.945979   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:33.946007   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:34.017355   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:34.017375   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:34.017390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:36.596284   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:36.608240   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:36.608306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:36.641846   74485 cri.go:89] found id: ""
	I1105 19:13:36.641878   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.641887   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:36.641901   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:36.641966   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:36.676553   74485 cri.go:89] found id: ""
	I1105 19:13:36.676584   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.676595   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:36.676602   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:36.676669   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:36.711931   74485 cri.go:89] found id: ""
	I1105 19:13:36.711961   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.711972   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:36.711980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:36.712042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:36.748510   74485 cri.go:89] found id: ""
	I1105 19:13:36.748534   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.748542   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:36.748547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:36.748596   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:36.781869   74485 cri.go:89] found id: ""
	I1105 19:13:36.781899   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.781912   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:36.781922   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:36.781983   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:36.816574   74485 cri.go:89] found id: ""
	I1105 19:13:36.816597   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.816605   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:36.816610   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:36.816658   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:36.852894   74485 cri.go:89] found id: ""
	I1105 19:13:36.852921   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.852928   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:36.852934   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:36.852996   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:36.891732   74485 cri.go:89] found id: ""
	I1105 19:13:36.891764   74485 logs.go:282] 0 containers: []
	W1105 19:13:36.891783   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:36.891795   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:36.891810   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:36.964948   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:36.964972   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:36.964987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:37.043727   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:37.043765   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:37.084306   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:37.084333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:37.133238   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:37.133274   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:34.461773   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:36.960440   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:34.724805   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.224830   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.227912   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:37.347383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.347770   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:39.647492   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:39.659944   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:39.660025   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:39.695382   74485 cri.go:89] found id: ""
	I1105 19:13:39.695405   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.695415   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:39.695422   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:39.695480   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:39.731807   74485 cri.go:89] found id: ""
	I1105 19:13:39.731833   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.731841   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:39.731846   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:39.731895   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:39.766913   74485 cri.go:89] found id: ""
	I1105 19:13:39.766945   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.766955   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:39.766963   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:39.767049   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:39.800265   74485 cri.go:89] found id: ""
	I1105 19:13:39.800288   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.800296   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:39.800301   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:39.800346   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:39.832753   74485 cri.go:89] found id: ""
	I1105 19:13:39.832781   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.832789   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:39.832794   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:39.832843   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:39.865950   74485 cri.go:89] found id: ""
	I1105 19:13:39.865980   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.865990   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:39.865997   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:39.866046   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:39.902918   74485 cri.go:89] found id: ""
	I1105 19:13:39.902948   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.902957   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:39.902962   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:39.903039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:39.935086   74485 cri.go:89] found id: ""
	I1105 19:13:39.935117   74485 logs.go:282] 0 containers: []
	W1105 19:13:39.935129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:39.935139   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:39.935152   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:39.997935   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:39.997961   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:39.997976   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:40.076794   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:40.076852   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:40.114178   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:40.114209   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:40.163512   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:40.163550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:38.961003   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:40.962241   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.724237   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:43.725317   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:41.847149   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:44.346097   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:42.676843   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:42.689855   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:42.689930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:42.724108   74485 cri.go:89] found id: ""
	I1105 19:13:42.724139   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.724148   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:42.724156   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:42.724218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:42.760816   74485 cri.go:89] found id: ""
	I1105 19:13:42.760844   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.760854   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:42.760861   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:42.760924   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:42.795111   74485 cri.go:89] found id: ""
	I1105 19:13:42.795134   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.795142   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:42.795147   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:42.795195   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:42.832964   74485 cri.go:89] found id: ""
	I1105 19:13:42.832988   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.832997   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:42.833003   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:42.833065   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:42.868817   74485 cri.go:89] found id: ""
	I1105 19:13:42.868848   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.868858   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:42.868865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:42.868933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:42.902015   74485 cri.go:89] found id: ""
	I1105 19:13:42.902044   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.902051   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:42.902056   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:42.902146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:42.934298   74485 cri.go:89] found id: ""
	I1105 19:13:42.934322   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.934330   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:42.934335   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:42.934385   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:42.969804   74485 cri.go:89] found id: ""
	I1105 19:13:42.969831   74485 logs.go:282] 0 containers: []
	W1105 19:13:42.969843   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:42.969854   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:42.969873   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:43.019922   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:43.019959   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:43.033594   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:43.033622   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:43.108220   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:43.108240   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:43.108251   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:43.191946   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:43.191987   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:45.730728   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:45.743344   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:45.743419   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:45.777693   74485 cri.go:89] found id: ""
	I1105 19:13:45.777728   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.777739   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:45.777747   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:45.777810   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:45.810195   74485 cri.go:89] found id: ""
	I1105 19:13:45.810222   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.810233   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:45.810240   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:45.810308   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:45.851210   74485 cri.go:89] found id: ""
	I1105 19:13:45.851240   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.851247   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:45.851252   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:45.851311   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:45.885501   74485 cri.go:89] found id: ""
	I1105 19:13:45.885531   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.885540   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:45.885546   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:45.885595   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:45.921638   74485 cri.go:89] found id: ""
	I1105 19:13:45.921667   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.921676   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:45.921684   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:45.921745   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:45.954341   74485 cri.go:89] found id: ""
	I1105 19:13:45.954373   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.954384   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:45.954394   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:45.954461   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:45.988840   74485 cri.go:89] found id: ""
	I1105 19:13:45.988865   74485 logs.go:282] 0 containers: []
	W1105 19:13:45.988873   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:45.988879   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:45.988949   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:46.025409   74485 cri.go:89] found id: ""
	I1105 19:13:46.025441   74485 logs.go:282] 0 containers: []
	W1105 19:13:46.025458   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:46.025470   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:46.025486   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:46.037763   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:46.037787   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:46.112619   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:46.112663   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:46.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:46.192165   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:46.192199   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:46.233235   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:46.233263   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:42.962569   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:45.461256   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:47.461781   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.225004   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.723774   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:46.346687   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.848011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:48.787685   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:48.800681   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:48.800749   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:48.835344   74485 cri.go:89] found id: ""
	I1105 19:13:48.835366   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.835374   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:48.835383   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:48.835429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:48.867447   74485 cri.go:89] found id: ""
	I1105 19:13:48.867474   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.867483   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:48.867488   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:48.867536   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:48.899135   74485 cri.go:89] found id: ""
	I1105 19:13:48.899160   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.899167   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:48.899172   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:48.899221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:48.932208   74485 cri.go:89] found id: ""
	I1105 19:13:48.932243   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.932255   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:48.932263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:48.932326   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:48.967174   74485 cri.go:89] found id: ""
	I1105 19:13:48.967202   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.967210   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:48.967215   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:48.967267   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:48.998902   74485 cri.go:89] found id: ""
	I1105 19:13:48.998932   74485 logs.go:282] 0 containers: []
	W1105 19:13:48.998942   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:48.998950   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:48.999030   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:49.030946   74485 cri.go:89] found id: ""
	I1105 19:13:49.030988   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.030999   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:49.031006   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:49.031074   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:49.063489   74485 cri.go:89] found id: ""
	I1105 19:13:49.063517   74485 logs.go:282] 0 containers: []
	W1105 19:13:49.063528   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:49.063540   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:49.063555   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:49.116433   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:49.116477   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:49.131439   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:49.131476   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:49.199770   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:49.199795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:49.199809   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:49.275503   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:49.275543   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:51.816208   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:51.829328   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:51.829399   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:51.863320   74485 cri.go:89] found id: ""
	I1105 19:13:51.863346   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.863354   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:51.863359   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:51.863406   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:51.896589   74485 cri.go:89] found id: ""
	I1105 19:13:51.896618   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.896628   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:51.896635   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:51.896697   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:51.933744   74485 cri.go:89] found id: ""
	I1105 19:13:51.933769   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.933776   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:51.933781   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:51.933829   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:51.970806   74485 cri.go:89] found id: ""
	I1105 19:13:51.970829   74485 logs.go:282] 0 containers: []
	W1105 19:13:51.970836   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:51.970842   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:51.970889   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:52.004087   74485 cri.go:89] found id: ""
	I1105 19:13:52.004116   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.004124   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:52.004129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:52.004186   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:52.041721   74485 cri.go:89] found id: ""
	I1105 19:13:52.041752   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.041763   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:52.041771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:52.041835   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:52.079253   74485 cri.go:89] found id: ""
	I1105 19:13:52.079277   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.079285   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:52.079292   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:52.079351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:52.112604   74485 cri.go:89] found id: ""
	I1105 19:13:52.112642   74485 logs.go:282] 0 containers: []
	W1105 19:13:52.112653   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:52.112664   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:52.112679   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:52.160799   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:52.160841   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:52.174323   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:52.174355   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:52.247358   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:52.247383   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:52.247395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:52.326071   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:52.326108   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:49.961938   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.461239   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.724514   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:52.724742   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:50.848418   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:53.346329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.347199   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:54.866454   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:54.879015   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:54.879093   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:54.911729   74485 cri.go:89] found id: ""
	I1105 19:13:54.911765   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.911777   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:54.911785   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:54.911846   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:54.943137   74485 cri.go:89] found id: ""
	I1105 19:13:54.943169   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.943185   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:54.943193   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:54.943253   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:54.977951   74485 cri.go:89] found id: ""
	I1105 19:13:54.977980   74485 logs.go:282] 0 containers: []
	W1105 19:13:54.977991   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:54.977998   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:54.978061   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:55.009453   74485 cri.go:89] found id: ""
	I1105 19:13:55.009478   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.009486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:55.009491   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:55.009537   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:55.040790   74485 cri.go:89] found id: ""
	I1105 19:13:55.040814   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.040821   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:55.040827   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:55.040878   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:55.073401   74485 cri.go:89] found id: ""
	I1105 19:13:55.073430   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.073441   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:55.073449   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:55.073508   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:55.105419   74485 cri.go:89] found id: ""
	I1105 19:13:55.105443   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.105451   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:55.105456   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:55.105511   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:55.137363   74485 cri.go:89] found id: ""
	I1105 19:13:55.137395   74485 logs.go:282] 0 containers: []
	W1105 19:13:55.137406   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:55.137416   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:55.137431   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:13:55.174176   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:55.174201   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:55.221658   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:55.221693   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:55.235044   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:55.235070   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:55.308192   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:55.308218   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:55.308234   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:54.461424   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:56.961198   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:55.223920   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.224915   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.847329   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:00.347371   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:57.892462   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:13:57.905472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:13:57.905543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:13:57.946044   74485 cri.go:89] found id: ""
	I1105 19:13:57.946071   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.946081   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:13:57.946089   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:13:57.946149   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:13:57.980762   74485 cri.go:89] found id: ""
	I1105 19:13:57.980791   74485 logs.go:282] 0 containers: []
	W1105 19:13:57.980803   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:13:57.980811   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:13:57.980874   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:13:58.013351   74485 cri.go:89] found id: ""
	I1105 19:13:58.013374   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.013381   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:13:58.013386   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:13:58.013433   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:13:58.049056   74485 cri.go:89] found id: ""
	I1105 19:13:58.049083   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.049091   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:13:58.049097   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:13:58.049147   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:13:58.081476   74485 cri.go:89] found id: ""
	I1105 19:13:58.081507   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.081517   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:13:58.081524   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:13:58.081583   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:13:58.114526   74485 cri.go:89] found id: ""
	I1105 19:13:58.114554   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.114564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:13:58.114571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:13:58.114630   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:13:58.148219   74485 cri.go:89] found id: ""
	I1105 19:13:58.148243   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.148252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:13:58.148257   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:13:58.148312   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:13:58.183254   74485 cri.go:89] found id: ""
	I1105 19:13:58.183277   74485 logs.go:282] 0 containers: []
	W1105 19:13:58.183285   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:13:58.183292   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:13:58.183304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:58.234747   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:13:58.234785   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:13:58.248269   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:13:58.248300   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:13:58.313290   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:13:58.313312   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:13:58.313327   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:13:58.389847   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:13:58.389889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:00.927957   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:00.941525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:00.941593   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:00.974891   74485 cri.go:89] found id: ""
	I1105 19:14:00.974920   74485 logs.go:282] 0 containers: []
	W1105 19:14:00.974931   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:00.974938   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:00.975018   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:01.008224   74485 cri.go:89] found id: ""
	I1105 19:14:01.008250   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.008262   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:01.008270   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:01.008328   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:01.044514   74485 cri.go:89] found id: ""
	I1105 19:14:01.044545   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.044553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:01.044559   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:01.044614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:01.077091   74485 cri.go:89] found id: ""
	I1105 19:14:01.077124   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.077135   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:01.077141   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:01.077197   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:01.109947   74485 cri.go:89] found id: ""
	I1105 19:14:01.109976   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.109986   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:01.109994   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:01.110054   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:01.146162   74485 cri.go:89] found id: ""
	I1105 19:14:01.146193   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.146203   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:01.146211   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:01.146275   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:01.180335   74485 cri.go:89] found id: ""
	I1105 19:14:01.180360   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.180370   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:01.180377   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:01.180436   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:01.216160   74485 cri.go:89] found id: ""
	I1105 19:14:01.216189   74485 logs.go:282] 0 containers: []
	W1105 19:14:01.216199   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:01.216221   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:01.216236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:01.229426   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:01.229455   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:01.298847   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:01.298874   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:01.298889   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:01.375255   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:01.375299   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:01.417946   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:01.418026   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:13:59.461014   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.961362   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:13:59.724103   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:01.724976   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.725344   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:02.349032   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:04.847734   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:03.973713   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:03.987128   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:03.987198   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:04.020050   74485 cri.go:89] found id: ""
	I1105 19:14:04.020081   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.020091   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:04.020098   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:04.020164   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:04.053458   74485 cri.go:89] found id: ""
	I1105 19:14:04.053485   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.053492   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:04.053498   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:04.053544   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:04.086417   74485 cri.go:89] found id: ""
	I1105 19:14:04.086442   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.086455   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:04.086461   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:04.086513   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:04.122035   74485 cri.go:89] found id: ""
	I1105 19:14:04.122059   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.122067   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:04.122073   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:04.122120   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:04.158732   74485 cri.go:89] found id: ""
	I1105 19:14:04.158758   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.158765   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:04.158771   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:04.158822   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:04.190497   74485 cri.go:89] found id: ""
	I1105 19:14:04.190525   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.190536   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:04.190543   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:04.190604   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:04.222040   74485 cri.go:89] found id: ""
	I1105 19:14:04.222066   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.222074   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:04.222079   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:04.222131   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:04.258753   74485 cri.go:89] found id: ""
	I1105 19:14:04.258781   74485 logs.go:282] 0 containers: []
	W1105 19:14:04.258793   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:04.258804   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:04.258819   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:04.299966   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:04.300052   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:04.355364   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:04.355395   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:04.368954   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:04.368980   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:04.431658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:04.431688   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:04.431700   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.015289   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:07.029580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:07.029644   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:07.066931   74485 cri.go:89] found id: ""
	I1105 19:14:07.066964   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.066993   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:07.067004   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:07.067059   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:07.104315   74485 cri.go:89] found id: ""
	I1105 19:14:07.104341   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.104349   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:07.104354   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:07.104401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:07.141271   74485 cri.go:89] found id: ""
	I1105 19:14:07.141298   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.141305   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:07.141311   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:07.141360   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:07.174600   74485 cri.go:89] found id: ""
	I1105 19:14:07.174631   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.174643   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:07.174653   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:07.174707   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:07.211920   74485 cri.go:89] found id: ""
	I1105 19:14:07.211958   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.211969   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:07.211975   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:07.212027   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:07.248238   74485 cri.go:89] found id: ""
	I1105 19:14:07.248269   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.248280   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:07.248286   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:07.248344   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:07.279833   74485 cri.go:89] found id: ""
	I1105 19:14:07.279864   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.279874   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:07.279881   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:07.279931   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:07.317411   74485 cri.go:89] found id: ""
	I1105 19:14:07.317441   74485 logs.go:282] 0 containers: []
	W1105 19:14:07.317452   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:07.317461   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:07.317474   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:07.390499   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:07.390535   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:07.390556   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:07.488858   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:07.488895   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:07.528612   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:07.528645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:07.581884   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:07.581927   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:03.961433   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.460953   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:06.223402   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:08.723797   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:07.348258   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:09.846465   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.096089   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:10.110828   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:10.110898   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:10.147299   74485 cri.go:89] found id: ""
	I1105 19:14:10.147332   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.147344   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:10.147350   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:10.147401   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:10.181457   74485 cri.go:89] found id: ""
	I1105 19:14:10.181482   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.181489   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:10.181495   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:10.181540   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:10.215210   74485 cri.go:89] found id: ""
	I1105 19:14:10.215241   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.215252   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:10.215259   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:10.215319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:10.249587   74485 cri.go:89] found id: ""
	I1105 19:14:10.249609   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.249617   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:10.249625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:10.249679   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:10.282566   74485 cri.go:89] found id: ""
	I1105 19:14:10.282591   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.282598   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:10.282604   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:10.282672   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:10.314312   74485 cri.go:89] found id: ""
	I1105 19:14:10.314344   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.314355   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:10.314361   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:10.314415   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:10.346988   74485 cri.go:89] found id: ""
	I1105 19:14:10.347016   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.347028   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:10.347035   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:10.347088   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:10.381326   74485 cri.go:89] found id: ""
	I1105 19:14:10.381354   74485 logs.go:282] 0 containers: []
	W1105 19:14:10.381370   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:10.381380   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:10.381394   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:10.418311   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:10.418344   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:10.469559   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:10.469590   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:10.482394   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:10.482427   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:10.551831   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:10.551854   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:10.551870   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:08.462072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.961478   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:10.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:12.724974   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:11.846737   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:14.346050   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:13.127576   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:13.143182   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:13.143242   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:13.188794   74485 cri.go:89] found id: ""
	I1105 19:14:13.188827   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.188839   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:13.188846   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:13.188897   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:13.221790   74485 cri.go:89] found id: ""
	I1105 19:14:13.221818   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.221829   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:13.221836   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:13.221893   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:13.255164   74485 cri.go:89] found id: ""
	I1105 19:14:13.255194   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.255205   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:13.255212   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:13.255272   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:13.288203   74485 cri.go:89] found id: ""
	I1105 19:14:13.288231   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.288241   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:13.288249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:13.288307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:13.321438   74485 cri.go:89] found id: ""
	I1105 19:14:13.321463   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.321475   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:13.321482   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:13.321541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:13.361858   74485 cri.go:89] found id: ""
	I1105 19:14:13.361886   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.361897   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:13.361905   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:13.361979   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:13.394210   74485 cri.go:89] found id: ""
	I1105 19:14:13.394239   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.394252   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:13.394260   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:13.394324   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:13.434665   74485 cri.go:89] found id: ""
	I1105 19:14:13.434697   74485 logs.go:282] 0 containers: []
	W1105 19:14:13.434705   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:13.434712   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:13.434724   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:13.447849   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:13.447875   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:13.514353   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:13.514377   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:13.514390   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:13.590746   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:13.590784   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:13.627704   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:13.627732   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:16.180171   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:16.193282   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:16.193342   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:16.230087   74485 cri.go:89] found id: ""
	I1105 19:14:16.230118   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.230128   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:16.230137   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:16.230200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:16.264315   74485 cri.go:89] found id: ""
	I1105 19:14:16.264348   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.264360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:16.264368   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:16.264429   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:16.298197   74485 cri.go:89] found id: ""
	I1105 19:14:16.298231   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.298243   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:16.298251   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:16.298316   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:16.333149   74485 cri.go:89] found id: ""
	I1105 19:14:16.333180   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.333193   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:16.333203   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:16.333268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:16.366863   74485 cri.go:89] found id: ""
	I1105 19:14:16.366887   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.366895   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:16.366900   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:16.366947   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:16.400434   74485 cri.go:89] found id: ""
	I1105 19:14:16.400458   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.400466   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:16.400472   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:16.400524   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:16.435475   74485 cri.go:89] found id: ""
	I1105 19:14:16.435497   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.435504   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:16.435510   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:16.435560   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:16.470577   74485 cri.go:89] found id: ""
	I1105 19:14:16.470604   74485 logs.go:282] 0 containers: []
	W1105 19:14:16.470612   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:16.470620   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:16.470632   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:16.483061   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:16.483094   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:16.550662   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:16.550690   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:16.550702   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:16.629372   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:16.629411   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:16.669488   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:16.669526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:12.961576   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.461132   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.461748   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:15.224068   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:17.225065   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:16.347305   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:18.847161   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.219244   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:19.232682   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:19.232744   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:19.264594   74485 cri.go:89] found id: ""
	I1105 19:14:19.264624   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.264635   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:19.264649   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:19.264708   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:19.301434   74485 cri.go:89] found id: ""
	I1105 19:14:19.301468   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.301479   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:19.301487   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:19.301558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:19.333465   74485 cri.go:89] found id: ""
	I1105 19:14:19.333494   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.333502   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:19.333508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:19.333558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:19.365865   74485 cri.go:89] found id: ""
	I1105 19:14:19.365892   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.365900   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:19.365906   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:19.365958   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:19.406533   74485 cri.go:89] found id: ""
	I1105 19:14:19.406563   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.406575   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:19.406583   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:19.406639   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:19.439351   74485 cri.go:89] found id: ""
	I1105 19:14:19.439377   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.439386   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:19.439392   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:19.439438   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:19.475033   74485 cri.go:89] found id: ""
	I1105 19:14:19.475058   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.475065   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:19.475070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:19.475119   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:19.508638   74485 cri.go:89] found id: ""
	I1105 19:14:19.508662   74485 logs.go:282] 0 containers: []
	W1105 19:14:19.508670   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:19.508678   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:19.508689   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:19.588268   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:19.588293   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:19.588304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:19.671382   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:19.671415   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:19.716497   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:19.716526   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:19.769686   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:19.769722   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.283476   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:22.296393   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:22.296456   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:22.331226   74485 cri.go:89] found id: ""
	I1105 19:14:22.331247   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.331255   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:22.331261   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:22.331306   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:22.363466   74485 cri.go:89] found id: ""
	I1105 19:14:22.363499   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.363510   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:22.363518   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:22.363586   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:22.397025   74485 cri.go:89] found id: ""
	I1105 19:14:22.397052   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.397061   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:22.397066   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:22.397116   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:22.429450   74485 cri.go:89] found id: ""
	I1105 19:14:22.429476   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.429486   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:22.429493   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:22.429554   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:22.461615   74485 cri.go:89] found id: ""
	I1105 19:14:22.461643   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.461654   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:22.461660   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:22.461728   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:22.492470   74485 cri.go:89] found id: ""
	I1105 19:14:22.492502   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.492513   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:22.492521   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:22.492587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:22.525335   74485 cri.go:89] found id: ""
	I1105 19:14:22.525358   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.525366   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:22.525372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:22.525423   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:22.558854   74485 cri.go:89] found id: ""
	I1105 19:14:22.558881   74485 logs.go:282] 0 containers: []
	W1105 19:14:22.558890   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:22.558901   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:22.558916   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:22.608638   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:22.608674   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:22.621769   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:22.621800   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:14:19.461812   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.960286   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:19.724482   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:22.224505   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:24.225072   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:21.347018   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:23.347099   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	W1105 19:14:22.688971   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:22.688998   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:22.689012   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:22.770517   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:22.770558   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:25.315778   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:25.335372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:25.335444   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:25.383988   74485 cri.go:89] found id: ""
	I1105 19:14:25.384019   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.384029   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:25.384036   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:25.384096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:25.432070   74485 cri.go:89] found id: ""
	I1105 19:14:25.432103   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.432115   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:25.432122   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:25.432184   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:25.464859   74485 cri.go:89] found id: ""
	I1105 19:14:25.464891   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.464902   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:25.464909   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:25.464976   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:25.498684   74485 cri.go:89] found id: ""
	I1105 19:14:25.498712   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.498719   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:25.498724   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:25.498777   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:25.532998   74485 cri.go:89] found id: ""
	I1105 19:14:25.533023   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.533032   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:25.533039   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:25.533084   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:25.568101   74485 cri.go:89] found id: ""
	I1105 19:14:25.568130   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.568138   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:25.568144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:25.568208   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:25.600470   74485 cri.go:89] found id: ""
	I1105 19:14:25.600495   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.600503   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:25.600509   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:25.600564   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:25.631792   74485 cri.go:89] found id: ""
	I1105 19:14:25.631824   74485 logs.go:282] 0 containers: []
	W1105 19:14:25.631834   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:25.631845   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:25.631860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:25.683820   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:25.683856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:25.698066   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:25.698095   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:25.764838   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:25.764869   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:25.764886   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:25.838791   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:25.838828   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:23.966002   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.460153   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:26.724324   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:29.223490   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:25.847528   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.346739   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:28.376183   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:28.389686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:28.389760   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:28.424180   74485 cri.go:89] found id: ""
	I1105 19:14:28.424209   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.424221   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:28.424229   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:28.424289   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:28.462742   74485 cri.go:89] found id: ""
	I1105 19:14:28.462765   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.462777   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:28.462784   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:28.462839   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:28.494550   74485 cri.go:89] found id: ""
	I1105 19:14:28.494574   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.494581   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:28.494588   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:28.494667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:28.525606   74485 cri.go:89] found id: ""
	I1105 19:14:28.525632   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.525639   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:28.525645   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:28.525696   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:28.558599   74485 cri.go:89] found id: ""
	I1105 19:14:28.558628   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.558638   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:28.558644   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:28.558701   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:28.590496   74485 cri.go:89] found id: ""
	I1105 19:14:28.590522   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.590530   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:28.590535   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:28.590599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:28.622748   74485 cri.go:89] found id: ""
	I1105 19:14:28.622772   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.622780   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:28.622786   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:28.622836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:28.656452   74485 cri.go:89] found id: ""
	I1105 19:14:28.656477   74485 logs.go:282] 0 containers: []
	W1105 19:14:28.656485   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:28.656493   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:28.656504   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.736458   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:28.736505   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:28.771923   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:28.771954   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:28.821099   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:28.821133   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:28.834698   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:28.834726   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:28.900543   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.400733   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:31.414573   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:31.414647   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:31.452244   74485 cri.go:89] found id: ""
	I1105 19:14:31.452275   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.452286   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:31.452293   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:31.452353   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:31.485898   74485 cri.go:89] found id: ""
	I1105 19:14:31.485920   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.485935   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:31.485940   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:31.486009   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:31.522826   74485 cri.go:89] found id: ""
	I1105 19:14:31.522850   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.522858   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:31.522865   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:31.522925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:31.560096   74485 cri.go:89] found id: ""
	I1105 19:14:31.560136   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.560164   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:31.560174   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:31.560234   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:31.596698   74485 cri.go:89] found id: ""
	I1105 19:14:31.596725   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.596733   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:31.596738   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:31.596792   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:31.635109   74485 cri.go:89] found id: ""
	I1105 19:14:31.635138   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.635148   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:31.635156   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:31.635221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:31.667612   74485 cri.go:89] found id: ""
	I1105 19:14:31.667639   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.667651   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:31.667658   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:31.667726   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:31.699815   74485 cri.go:89] found id: ""
	I1105 19:14:31.699844   74485 logs.go:282] 0 containers: []
	W1105 19:14:31.699854   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:31.699864   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:31.699879   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:31.737165   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:31.737196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:31.788513   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:31.788550   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:31.801580   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:31.801609   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:31.871658   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:31.871683   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:31.871696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:28.462108   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.961875   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:31.223977   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:33.724027   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:30.847090   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:32.847233   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.847857   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:34.450954   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:34.466129   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:34.466204   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:34.499984   74485 cri.go:89] found id: ""
	I1105 19:14:34.500009   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.500020   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:34.500027   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:34.500091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:34.532923   74485 cri.go:89] found id: ""
	I1105 19:14:34.532950   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.532958   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:34.532969   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:34.533017   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:34.566772   74485 cri.go:89] found id: ""
	I1105 19:14:34.566803   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.566811   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:34.566817   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:34.566872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:34.607398   74485 cri.go:89] found id: ""
	I1105 19:14:34.607422   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.607430   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:34.607435   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:34.607497   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:34.640091   74485 cri.go:89] found id: ""
	I1105 19:14:34.640123   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.640135   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:34.640143   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:34.640207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:34.677164   74485 cri.go:89] found id: ""
	I1105 19:14:34.677201   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.677211   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:34.677217   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:34.677266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:34.714900   74485 cri.go:89] found id: ""
	I1105 19:14:34.714931   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.714942   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:34.714949   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:34.715023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:34.751003   74485 cri.go:89] found id: ""
	I1105 19:14:34.751032   74485 logs.go:282] 0 containers: []
	W1105 19:14:34.751040   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:34.751048   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:34.751059   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:34.822279   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:34.822301   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:34.822315   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:34.898607   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:34.898640   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:34.934727   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:34.934754   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:34.985935   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:34.985969   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.500117   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:37.512467   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:37.512541   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:37.544914   74485 cri.go:89] found id: ""
	I1105 19:14:37.544941   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.544952   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:37.544959   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:37.545028   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:37.581507   74485 cri.go:89] found id: ""
	I1105 19:14:37.581535   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.581545   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:37.581553   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:37.581612   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:37.615546   74485 cri.go:89] found id: ""
	I1105 19:14:37.615576   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.615585   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:37.615592   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:37.615667   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:37.648239   74485 cri.go:89] found id: ""
	I1105 19:14:37.648267   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.648276   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:37.648283   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:37.648343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:33.460860   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:35.461416   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:36.224852   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:38.725488   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.347563   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:39.347732   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:37.682861   74485 cri.go:89] found id: ""
	I1105 19:14:37.682891   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.682898   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:37.682904   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:37.682952   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:37.715506   74485 cri.go:89] found id: ""
	I1105 19:14:37.715532   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.715540   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:37.715547   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:37.715597   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:37.747973   74485 cri.go:89] found id: ""
	I1105 19:14:37.748003   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.748014   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:37.748022   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:37.748083   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:37.780270   74485 cri.go:89] found id: ""
	I1105 19:14:37.780294   74485 logs.go:282] 0 containers: []
	W1105 19:14:37.780302   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:37.780310   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:37.780321   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:37.793885   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:37.793914   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:37.860114   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:37.860140   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:37.860154   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:37.941221   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:37.941255   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.980537   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:37.980567   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.532301   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:40.545540   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:40.545599   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:40.578642   74485 cri.go:89] found id: ""
	I1105 19:14:40.578687   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.578699   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:40.578707   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:40.578772   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:40.612049   74485 cri.go:89] found id: ""
	I1105 19:14:40.612078   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.612089   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:40.612097   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:40.612159   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:40.644495   74485 cri.go:89] found id: ""
	I1105 19:14:40.644519   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.644527   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:40.644532   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:40.644587   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:40.676890   74485 cri.go:89] found id: ""
	I1105 19:14:40.676923   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.676931   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:40.676937   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:40.676984   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:40.710095   74485 cri.go:89] found id: ""
	I1105 19:14:40.710125   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.710136   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:40.710144   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:40.710200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:40.748323   74485 cri.go:89] found id: ""
	I1105 19:14:40.748353   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.748364   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:40.748372   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:40.748501   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:40.781578   74485 cri.go:89] found id: ""
	I1105 19:14:40.781606   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.781618   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:40.781626   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:40.781689   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:40.816010   74485 cri.go:89] found id: ""
	I1105 19:14:40.816048   74485 logs.go:282] 0 containers: []
	W1105 19:14:40.816060   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:40.816071   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:40.816086   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:40.869836   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:40.869876   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:40.883436   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:40.883471   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:40.946538   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:40.946566   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:40.946585   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:41.023085   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:41.023123   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:37.962163   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.461278   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:40.726894   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.224939   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:41.847053   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:44.346789   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:43.566841   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:43.579425   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:43.579498   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:43.620500   74485 cri.go:89] found id: ""
	I1105 19:14:43.620526   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.620535   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:43.620541   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:43.620600   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:43.652992   74485 cri.go:89] found id: ""
	I1105 19:14:43.653024   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.653035   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:43.653042   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:43.653105   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:43.686945   74485 cri.go:89] found id: ""
	I1105 19:14:43.686991   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.687003   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:43.687010   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:43.687124   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:43.720075   74485 cri.go:89] found id: ""
	I1105 19:14:43.720103   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.720114   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:43.720121   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:43.720179   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:43.757969   74485 cri.go:89] found id: ""
	I1105 19:14:43.757997   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.758005   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:43.758011   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:43.758071   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:43.790068   74485 cri.go:89] found id: ""
	I1105 19:14:43.790094   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.790103   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:43.790109   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:43.790153   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:43.821696   74485 cri.go:89] found id: ""
	I1105 19:14:43.821722   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.821733   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:43.821741   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:43.821803   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:43.855976   74485 cri.go:89] found id: ""
	I1105 19:14:43.856003   74485 logs.go:282] 0 containers: []
	W1105 19:14:43.856011   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:43.856019   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:43.856029   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:43.934375   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:43.934409   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:43.972567   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:43.972597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:44.025660   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:44.025696   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:44.039229   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:44.039258   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:44.112179   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:46.612815   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:46.626070   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:46.626145   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:46.659184   74485 cri.go:89] found id: ""
	I1105 19:14:46.659210   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.659218   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:46.659227   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:46.659288   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:46.691887   74485 cri.go:89] found id: ""
	I1105 19:14:46.691917   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.691928   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:46.691934   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:46.692003   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:46.725745   74485 cri.go:89] found id: ""
	I1105 19:14:46.725776   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.725787   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:46.725795   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:46.725847   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:46.761733   74485 cri.go:89] found id: ""
	I1105 19:14:46.761762   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.761773   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:46.761780   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:46.761842   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:46.792926   74485 cri.go:89] found id: ""
	I1105 19:14:46.792955   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.792966   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:46.792974   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:46.793036   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:46.824462   74485 cri.go:89] found id: ""
	I1105 19:14:46.824503   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.824512   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:46.824519   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:46.824580   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:46.865057   74485 cri.go:89] found id: ""
	I1105 19:14:46.865082   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.865090   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:46.865095   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:46.865146   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:46.901357   74485 cri.go:89] found id: ""
	I1105 19:14:46.901385   74485 logs.go:282] 0 containers: []
	W1105 19:14:46.901393   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:46.901401   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:46.901414   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:46.951986   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:46.952021   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:46.966035   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:46.966065   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:47.035163   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:47.035184   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:47.035196   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:47.115825   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:47.115860   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:42.961397   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.460846   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.461570   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:45.724189   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:47.724319   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:46.847553   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.346787   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.658737   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:49.672088   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:49.672182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:49.708638   74485 cri.go:89] found id: ""
	I1105 19:14:49.708666   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.708674   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:49.708679   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:49.708736   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:49.744485   74485 cri.go:89] found id: ""
	I1105 19:14:49.744513   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.744521   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:49.744526   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:49.744572   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:49.779758   74485 cri.go:89] found id: ""
	I1105 19:14:49.779785   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.779794   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:49.779800   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:49.779858   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:49.814216   74485 cri.go:89] found id: ""
	I1105 19:14:49.814248   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.814256   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:49.814262   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:49.814310   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:49.851348   74485 cri.go:89] found id: ""
	I1105 19:14:49.851377   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.851389   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:49.851396   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:49.851455   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:49.883866   74485 cri.go:89] found id: ""
	I1105 19:14:49.883897   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.883906   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:49.883912   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:49.883959   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:49.916944   74485 cri.go:89] found id: ""
	I1105 19:14:49.916967   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.916975   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:49.916980   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:49.917039   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:49.950405   74485 cri.go:89] found id: ""
	I1105 19:14:49.950437   74485 logs.go:282] 0 containers: []
	W1105 19:14:49.950449   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:49.950459   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:49.950475   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:49.996064   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:49.996102   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:50.044865   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:50.044902   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:50.058206   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:50.058236   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:50.130371   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:50.130397   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:50.130412   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:49.960550   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.961271   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:49.724896   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.224128   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:51.346823   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:53.847102   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:52.706441   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:52.719571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:52.719655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:52.753850   74485 cri.go:89] found id: ""
	I1105 19:14:52.753880   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.753891   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:52.753899   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:52.753961   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:52.794112   74485 cri.go:89] found id: ""
	I1105 19:14:52.794139   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.794149   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:52.794156   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:52.794218   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:52.830151   74485 cri.go:89] found id: ""
	I1105 19:14:52.830178   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.830188   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:52.830195   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:52.830258   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:52.864803   74485 cri.go:89] found id: ""
	I1105 19:14:52.864832   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.864853   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:52.864868   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:52.864930   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:52.897237   74485 cri.go:89] found id: ""
	I1105 19:14:52.897271   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.897282   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:52.897289   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:52.897351   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:52.932236   74485 cri.go:89] found id: ""
	I1105 19:14:52.932262   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.932270   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:52.932275   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:52.932319   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:52.965781   74485 cri.go:89] found id: ""
	I1105 19:14:52.965808   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.965817   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:52.965825   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:52.965918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:52.999098   74485 cri.go:89] found id: ""
	I1105 19:14:52.999121   74485 logs.go:282] 0 containers: []
	W1105 19:14:52.999129   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:52.999137   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:52.999146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:53.051085   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:53.051127   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:53.064690   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:53.064717   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:53.128334   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:53.128358   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:53.128372   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:53.207751   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:53.207791   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:55.745430   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:55.758734   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:55.758821   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:55.791827   74485 cri.go:89] found id: ""
	I1105 19:14:55.791854   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.791862   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:55.791868   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:55.791922   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:55.824191   74485 cri.go:89] found id: ""
	I1105 19:14:55.824217   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.824224   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:55.824230   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:55.824278   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:55.858579   74485 cri.go:89] found id: ""
	I1105 19:14:55.858611   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.858619   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:55.858625   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:55.858673   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:55.891579   74485 cri.go:89] found id: ""
	I1105 19:14:55.891604   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.891612   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:55.891617   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:55.891663   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:55.924881   74485 cri.go:89] found id: ""
	I1105 19:14:55.924910   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.924920   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:55.924930   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:55.924999   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:55.956634   74485 cri.go:89] found id: ""
	I1105 19:14:55.956663   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.956678   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:55.956686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:55.956742   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:55.988770   74485 cri.go:89] found id: ""
	I1105 19:14:55.988803   74485 logs.go:282] 0 containers: []
	W1105 19:14:55.988814   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:55.988821   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:55.988880   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:56.022236   74485 cri.go:89] found id: ""
	I1105 19:14:56.022257   74485 logs.go:282] 0 containers: []
	W1105 19:14:56.022266   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:56.022273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:56.022284   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:56.073035   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:56.073071   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:56.086899   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:56.086923   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:56.158219   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:56.158247   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:56.158259   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:56.246621   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:56.246660   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:53.962537   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.461516   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:54.724612   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:56.725381   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:59.223995   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:55.847591   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.346027   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:00.349718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:14:58.791443   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:14:58.804398   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:14:58.804476   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:14:58.837812   74485 cri.go:89] found id: ""
	I1105 19:14:58.837840   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.837856   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:14:58.837863   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:14:58.837926   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:14:58.870154   74485 cri.go:89] found id: ""
	I1105 19:14:58.870186   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.870197   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:14:58.870204   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:14:58.870268   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:14:58.906518   74485 cri.go:89] found id: ""
	I1105 19:14:58.906545   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.906553   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:14:58.906563   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:14:58.906614   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:14:58.939320   74485 cri.go:89] found id: ""
	I1105 19:14:58.939346   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.939357   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:14:58.939364   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:14:58.939426   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:14:58.974116   74485 cri.go:89] found id: ""
	I1105 19:14:58.974143   74485 logs.go:282] 0 containers: []
	W1105 19:14:58.974153   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:14:58.974160   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:14:58.974221   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:14:59.006820   74485 cri.go:89] found id: ""
	I1105 19:14:59.006854   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.006866   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:14:59.006873   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:14:59.006933   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:14:59.039691   74485 cri.go:89] found id: ""
	I1105 19:14:59.039723   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.039735   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:14:59.039742   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:14:59.039800   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:14:59.071829   74485 cri.go:89] found id: ""
	I1105 19:14:59.071860   74485 logs.go:282] 0 containers: []
	W1105 19:14:59.071881   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:14:59.071893   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:14:59.071906   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:14:59.124158   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:14:59.124195   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:14:59.138563   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:14:59.138594   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:14:59.216148   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:14:59.216174   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:14:59.216189   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:14:59.295262   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:14:59.295297   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:01.833789   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:01.847332   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:01.847408   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:01.882721   74485 cri.go:89] found id: ""
	I1105 19:15:01.882743   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.882750   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:01.882755   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:01.882811   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:01.916457   74485 cri.go:89] found id: ""
	I1105 19:15:01.916479   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.916487   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:01.916502   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:01.916557   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:01.950521   74485 cri.go:89] found id: ""
	I1105 19:15:01.950552   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.950564   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:01.950571   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:01.950624   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:01.985823   74485 cri.go:89] found id: ""
	I1105 19:15:01.985852   74485 logs.go:282] 0 containers: []
	W1105 19:15:01.985862   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:01.985870   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:01.985918   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:02.021689   74485 cri.go:89] found id: ""
	I1105 19:15:02.021720   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.021731   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:02.021739   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:02.021804   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:02.058632   74485 cri.go:89] found id: ""
	I1105 19:15:02.058658   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.058666   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:02.058672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:02.058738   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:02.097916   74485 cri.go:89] found id: ""
	I1105 19:15:02.097947   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.097956   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:02.097961   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:02.098010   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:02.131992   74485 cri.go:89] found id: ""
	I1105 19:15:02.132027   74485 logs.go:282] 0 containers: []
	W1105 19:15:02.132038   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:02.132050   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:02.132066   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:02.188605   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:02.188645   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:02.201873   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:02.201904   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:02.274767   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:02.274795   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:02.274811   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:02.358520   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:02.358559   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:14:58.962072   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.461009   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:01.224719   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:03.724333   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:02.847593   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.348665   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:04.897693   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:04.913131   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:04.913207   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:04.952546   74485 cri.go:89] found id: ""
	I1105 19:15:04.952571   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.952579   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:04.952584   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:04.952643   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:04.987334   74485 cri.go:89] found id: ""
	I1105 19:15:04.987360   74485 logs.go:282] 0 containers: []
	W1105 19:15:04.987368   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:04.987374   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:04.987434   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:05.021873   74485 cri.go:89] found id: ""
	I1105 19:15:05.021906   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.021919   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:05.021926   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:05.021985   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:05.056169   74485 cri.go:89] found id: ""
	I1105 19:15:05.056199   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.056208   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:05.056213   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:05.056265   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:05.093090   74485 cri.go:89] found id: ""
	I1105 19:15:05.093117   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.093125   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:05.093130   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:05.093182   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:05.127533   74485 cri.go:89] found id: ""
	I1105 19:15:05.127557   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.127564   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:05.127576   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:05.127625   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:05.165127   74485 cri.go:89] found id: ""
	I1105 19:15:05.165162   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.165173   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:05.165180   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:05.165243   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:05.200526   74485 cri.go:89] found id: ""
	I1105 19:15:05.200556   74485 logs.go:282] 0 containers: []
	W1105 19:15:05.200567   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:05.200578   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:05.200593   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:05.247497   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:05.247535   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:05.261963   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:05.261996   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:05.336813   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:05.336833   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:05.336844   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:05.412278   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:05.412320   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:03.461266   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.463142   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:05.728530   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:08.227700   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.848748   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:10.346754   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:07.951085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:07.966125   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:07.966203   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:08.004253   74485 cri.go:89] found id: ""
	I1105 19:15:08.004291   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.004302   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:08.004310   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:08.004373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:08.039539   74485 cri.go:89] found id: ""
	I1105 19:15:08.039562   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.039569   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:08.039575   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:08.039629   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:08.076043   74485 cri.go:89] found id: ""
	I1105 19:15:08.076080   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.076093   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:08.076101   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:08.076157   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:08.110489   74485 cri.go:89] found id: ""
	I1105 19:15:08.110512   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.110519   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:08.110525   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:08.110589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:08.147532   74485 cri.go:89] found id: ""
	I1105 19:15:08.147564   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.147574   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:08.147580   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:08.147628   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:08.182225   74485 cri.go:89] found id: ""
	I1105 19:15:08.182248   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.182256   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:08.182263   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:08.182322   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:08.223488   74485 cri.go:89] found id: ""
	I1105 19:15:08.223524   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.223536   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:08.223544   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:08.223610   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:08.266524   74485 cri.go:89] found id: ""
	I1105 19:15:08.266559   74485 logs.go:282] 0 containers: []
	W1105 19:15:08.266571   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:08.266582   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:08.266597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:08.279036   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:08.279061   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:08.346030   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:08.346052   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:08.346064   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:08.428081   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:08.428118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:08.464760   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:08.464789   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.016193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:11.030598   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:11.030681   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:11.066035   74485 cri.go:89] found id: ""
	I1105 19:15:11.066064   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.066073   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:11.066078   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:11.066133   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:11.103906   74485 cri.go:89] found id: ""
	I1105 19:15:11.103937   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.103948   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:11.103955   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:11.104023   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:11.142936   74485 cri.go:89] found id: ""
	I1105 19:15:11.143024   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.143034   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:11.143041   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:11.143091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:11.180041   74485 cri.go:89] found id: ""
	I1105 19:15:11.180074   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.180086   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:11.180094   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:11.180158   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:11.215661   74485 cri.go:89] found id: ""
	I1105 19:15:11.215693   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.215701   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:11.215707   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:11.215758   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:11.252603   74485 cri.go:89] found id: ""
	I1105 19:15:11.252651   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.252663   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:11.252672   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:11.252739   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:11.299295   74485 cri.go:89] found id: ""
	I1105 19:15:11.299328   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.299340   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:11.299347   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:11.299402   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:11.355153   74485 cri.go:89] found id: ""
	I1105 19:15:11.355177   74485 logs.go:282] 0 containers: []
	W1105 19:15:11.355185   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:11.355193   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:11.355206   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:11.441076   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:11.441118   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:11.480367   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:11.480396   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:11.534646   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:11.534683   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:11.548141   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:11.548170   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:11.616452   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:07.961073   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:09.962118   73732 pod_ready.go:103] pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.455874   73732 pod_ready.go:82] duration metric: took 4m0.000853559s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:12.455911   73732 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vw2sm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:15:12.455936   73732 pod_ready.go:39] duration metric: took 4m14.55377544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:12.455984   73732 kubeadm.go:597] duration metric: took 4m23.030552871s to restartPrimaryControlPlane
	W1105 19:15:12.456078   73732 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:12.456111   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:10.724247   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.725886   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:12.846646   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.848074   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:14.117448   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:14.131224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:14.131297   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:14.167811   74485 cri.go:89] found id: ""
	I1105 19:15:14.167843   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.167855   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:14.167862   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:14.167921   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:14.204128   74485 cri.go:89] found id: ""
	I1105 19:15:14.204156   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.204164   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:14.204169   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:14.204232   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:14.240687   74485 cri.go:89] found id: ""
	I1105 19:15:14.240716   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.240727   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:14.240735   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:14.240788   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:14.274204   74485 cri.go:89] found id: ""
	I1105 19:15:14.274231   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.274242   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:14.274249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:14.274307   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:14.312090   74485 cri.go:89] found id: ""
	I1105 19:15:14.312119   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.312130   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:14.312139   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:14.312200   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:14.346824   74485 cri.go:89] found id: ""
	I1105 19:15:14.346857   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.346868   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:14.346875   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:14.346934   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:14.380634   74485 cri.go:89] found id: ""
	I1105 19:15:14.380668   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.380679   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:14.380686   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:14.380746   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:14.414402   74485 cri.go:89] found id: ""
	I1105 19:15:14.414432   74485 logs.go:282] 0 containers: []
	W1105 19:15:14.414441   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:14.414449   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:14.414459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:14.464542   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:14.464581   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:14.478195   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:14.478225   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:14.553670   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:14.553693   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:14.553708   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:14.634619   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:14.634659   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.174085   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:17.191712   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:17.191771   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:17.234101   74485 cri.go:89] found id: ""
	I1105 19:15:17.234132   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.234143   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:17.234149   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:17.234213   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:17.281548   74485 cri.go:89] found id: ""
	I1105 19:15:17.281574   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.281581   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:17.281588   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:17.281655   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:17.337698   74485 cri.go:89] found id: ""
	I1105 19:15:17.337727   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.337735   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:17.337743   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:17.337790   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:17.371756   74485 cri.go:89] found id: ""
	I1105 19:15:17.371782   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.371790   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:17.371796   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:17.371854   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:17.404989   74485 cri.go:89] found id: ""
	I1105 19:15:17.405015   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.405026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:17.405033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:17.405096   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:17.438613   74485 cri.go:89] found id: ""
	I1105 19:15:17.438637   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.438648   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:17.438656   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:17.438717   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:17.470465   74485 cri.go:89] found id: ""
	I1105 19:15:17.470494   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.470502   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:17.470508   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:17.470558   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:17.503835   74485 cri.go:89] found id: ""
	I1105 19:15:17.503867   74485 logs.go:282] 0 containers: []
	W1105 19:15:17.503876   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:17.503884   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:17.503896   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:17.584110   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:17.584146   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:17.626928   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:17.626955   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:15.223749   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.225434   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.347847   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:19.847047   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:17.679356   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:17.679397   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:17.693476   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:17.693506   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:17.766809   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.266926   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:20.282219   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:20.282293   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:20.322133   74485 cri.go:89] found id: ""
	I1105 19:15:20.322163   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.322171   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:20.322178   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:20.322248   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:20.357030   74485 cri.go:89] found id: ""
	I1105 19:15:20.357072   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.357084   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:20.357091   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:20.357156   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:20.390523   74485 cri.go:89] found id: ""
	I1105 19:15:20.390549   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.390559   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:20.390567   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:20.390631   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:20.425807   74485 cri.go:89] found id: ""
	I1105 19:15:20.425830   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.425837   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:20.425843   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:20.425903   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:20.461984   74485 cri.go:89] found id: ""
	I1105 19:15:20.462014   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.462026   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:20.462033   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:20.462094   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:20.495689   74485 cri.go:89] found id: ""
	I1105 19:15:20.495725   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.495739   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:20.495746   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:20.495799   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:20.528666   74485 cri.go:89] found id: ""
	I1105 19:15:20.528701   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.528713   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:20.528721   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:20.528783   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:20.562566   74485 cri.go:89] found id: ""
	I1105 19:15:20.562596   74485 logs.go:282] 0 containers: []
	W1105 19:15:20.562606   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:20.562614   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:20.562624   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:20.610961   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:20.611000   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:20.623898   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:20.623928   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:20.696412   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:20.696440   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:20.696456   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:20.779601   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:20.779642   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:19.725198   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.224019   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.225286   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:22.347992   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:24.846718   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:23.319846   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:23.333278   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:23.333357   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:23.370771   74485 cri.go:89] found id: ""
	I1105 19:15:23.370796   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.370805   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:23.370810   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:23.370872   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:23.405994   74485 cri.go:89] found id: ""
	I1105 19:15:23.406021   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.406029   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:23.406034   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:23.406092   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:23.443729   74485 cri.go:89] found id: ""
	I1105 19:15:23.443757   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.443767   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:23.443774   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:23.443836   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:23.476162   74485 cri.go:89] found id: ""
	I1105 19:15:23.476188   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.476197   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:23.476205   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:23.476266   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:23.509325   74485 cri.go:89] found id: ""
	I1105 19:15:23.509353   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.509363   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:23.509371   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:23.509427   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:23.541880   74485 cri.go:89] found id: ""
	I1105 19:15:23.541912   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.541922   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:23.541929   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:23.541993   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:23.574204   74485 cri.go:89] found id: ""
	I1105 19:15:23.574236   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.574248   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:23.574256   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:23.574323   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:23.606865   74485 cri.go:89] found id: ""
	I1105 19:15:23.606896   74485 logs.go:282] 0 containers: []
	W1105 19:15:23.606908   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:23.606918   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:23.606932   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:23.673771   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:23.673792   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:23.673803   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:23.753298   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:23.753335   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:23.792273   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:23.792304   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:23.843072   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:23.843110   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.356859   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:26.369417   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:26.369488   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:26.403611   74485 cri.go:89] found id: ""
	I1105 19:15:26.403639   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.403647   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:26.403653   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:26.403725   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:26.439891   74485 cri.go:89] found id: ""
	I1105 19:15:26.439924   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.439936   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:26.439943   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:26.439991   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:26.473502   74485 cri.go:89] found id: ""
	I1105 19:15:26.473542   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.473554   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:26.473561   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:26.473640   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:26.505666   74485 cri.go:89] found id: ""
	I1105 19:15:26.505695   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.505703   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:26.505710   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:26.505769   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:26.539781   74485 cri.go:89] found id: ""
	I1105 19:15:26.539815   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.539827   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:26.539835   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:26.539911   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:26.574673   74485 cri.go:89] found id: ""
	I1105 19:15:26.574712   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.574721   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:26.574727   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:26.574773   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:26.608410   74485 cri.go:89] found id: ""
	I1105 19:15:26.608433   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.608441   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:26.608446   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:26.608494   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:26.644036   74485 cri.go:89] found id: ""
	I1105 19:15:26.644065   74485 logs.go:282] 0 containers: []
	W1105 19:15:26.644076   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:26.644087   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:26.644098   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.718901   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:26.718937   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:26.758920   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:26.758953   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:26.811241   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:26.811277   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:26.824931   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:26.824961   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:26.891799   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:26.725062   74141 pod_ready.go:103] pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:27.724594   74141 pod_ready.go:82] duration metric: took 4m0.006622979s for pod "metrics-server-6867b74b74-44mcg" in "kube-system" namespace to be "Ready" ...
	E1105 19:15:27.724627   74141 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1105 19:15:27.724644   74141 pod_ready.go:39] duration metric: took 4m0.807889519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:27.724663   74141 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:15:27.724711   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:27.724769   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:27.771870   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:27.771897   74141 cri.go:89] found id: ""
	I1105 19:15:27.771906   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:27.771966   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.776484   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:27.776553   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:27.823529   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:27.823564   74141 cri.go:89] found id: ""
	I1105 19:15:27.823576   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:27.823638   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.828600   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:27.828685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:27.878206   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:27.878242   74141 cri.go:89] found id: ""
	I1105 19:15:27.878254   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:27.878317   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.882545   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:27.882640   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:27.920102   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:27.920127   74141 cri.go:89] found id: ""
	I1105 19:15:27.920137   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:27.920189   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.924516   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:27.924593   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:27.969493   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:27.969523   74141 cri.go:89] found id: ""
	I1105 19:15:27.969534   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:27.969589   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:27.973637   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:27.973724   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:28.014369   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.014396   74141 cri.go:89] found id: ""
	I1105 19:15:28.014405   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:28.014463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.019040   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:28.019112   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:28.056411   74141 cri.go:89] found id: ""
	I1105 19:15:28.056438   74141 logs.go:282] 0 containers: []
	W1105 19:15:28.056446   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:28.056452   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:28.056502   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:28.099541   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.099562   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.099566   74141 cri.go:89] found id: ""
	I1105 19:15:28.099573   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:28.099628   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.104144   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:28.108443   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:28.108465   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:28.153262   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:28.153302   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:28.197210   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:28.197237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:28.242915   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:28.242944   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:28.257468   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:28.257497   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:28.299795   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:28.299825   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:28.333983   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:28.334015   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:28.369174   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:28.369202   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:28.405838   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:28.405869   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:28.477842   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:28.477880   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:28.595832   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:28.595865   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:28.639146   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:28.639179   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:28.689519   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:28.689554   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:26.846977   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:28.847878   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:29.392417   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:29.405249   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:29.405331   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:29.437397   74485 cri.go:89] found id: ""
	I1105 19:15:29.437432   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.437443   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:29.437450   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:29.437504   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:29.469908   74485 cri.go:89] found id: ""
	I1105 19:15:29.469938   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.469946   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:29.469951   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:29.470008   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:29.502302   74485 cri.go:89] found id: ""
	I1105 19:15:29.502331   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.502339   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:29.502345   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:29.502391   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:29.534285   74485 cri.go:89] found id: ""
	I1105 19:15:29.534309   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.534317   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:29.534322   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:29.534373   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:29.571918   74485 cri.go:89] found id: ""
	I1105 19:15:29.571962   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.571973   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:29.571983   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:29.572042   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:29.605324   74485 cri.go:89] found id: ""
	I1105 19:15:29.605354   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.605365   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:29.605373   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:29.605435   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:29.640181   74485 cri.go:89] found id: ""
	I1105 19:15:29.640210   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.640218   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:29.640224   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:29.640273   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:29.671121   74485 cri.go:89] found id: ""
	I1105 19:15:29.671147   74485 logs.go:282] 0 containers: []
	W1105 19:15:29.671155   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:29.671164   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:29.671174   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:29.750821   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:29.750856   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:29.787452   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:29.787479   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:29.840413   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:29.840459   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:29.855540   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:29.855580   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:29.925849   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:32.426016   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:32.438759   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:32.438824   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:32.476376   74485 cri.go:89] found id: ""
	I1105 19:15:32.476406   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.476416   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:15:32.476423   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:32.476490   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:32.512328   74485 cri.go:89] found id: ""
	I1105 19:15:32.512352   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.512360   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:15:32.512365   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:32.512414   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:32.546803   74485 cri.go:89] found id: ""
	I1105 19:15:32.546833   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.546844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:15:32.546851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:32.546925   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:32.585904   74485 cri.go:89] found id: ""
	I1105 19:15:32.585934   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.585946   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:15:32.585953   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:32.586014   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:32.620976   74485 cri.go:89] found id: ""
	I1105 19:15:32.621005   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.621012   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:15:32.621018   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:32.621082   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.668028   74141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:31.684024   74141 api_server.go:72] duration metric: took 4m12.496021782s to wait for apiserver process to appear ...
	I1105 19:15:31.684060   74141 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:15:31.684105   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:31.684163   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:31.719462   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:31.719496   74141 cri.go:89] found id: ""
	I1105 19:15:31.719506   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:31.719559   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.723632   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:31.723702   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:31.761976   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:31.762001   74141 cri.go:89] found id: ""
	I1105 19:15:31.762010   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:31.762067   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.766066   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:31.766137   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:31.799673   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:31.799694   74141 cri.go:89] found id: ""
	I1105 19:15:31.799701   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:31.799753   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.803632   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:31.803714   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:31.841782   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:31.841808   74141 cri.go:89] found id: ""
	I1105 19:15:31.841818   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:31.841873   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.850409   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:31.850471   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:31.891932   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:31.891959   74141 cri.go:89] found id: ""
	I1105 19:15:31.891969   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:31.892026   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.896065   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:31.896125   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:31.932759   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:31.932781   74141 cri.go:89] found id: ""
	I1105 19:15:31.932788   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:31.932831   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:31.936611   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:31.936685   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:31.971296   74141 cri.go:89] found id: ""
	I1105 19:15:31.971328   74141 logs.go:282] 0 containers: []
	W1105 19:15:31.971339   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:31.971348   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:31.971410   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:32.006153   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:32.006173   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.006177   74141 cri.go:89] found id: ""
	I1105 19:15:32.006184   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:32.006226   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.010159   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:32.013807   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.013831   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.084222   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:32.084273   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:32.127875   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:32.127928   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:32.173008   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:32.173041   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:32.235366   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.235402   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.714822   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:32.714861   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:32.750733   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.750761   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.796233   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.796264   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.809269   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.809296   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:32.931162   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:32.931196   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:32.968551   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:32.968578   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:33.008115   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:33.008152   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:33.046201   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:33.046237   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:31.346652   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:33.347118   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:32.658958   74485 cri.go:89] found id: ""
	I1105 19:15:32.659006   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.659018   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:15:32.659026   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:32.659091   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:32.694317   74485 cri.go:89] found id: ""
	I1105 19:15:32.694341   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.694349   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:32.694354   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:15:32.694403   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:15:32.728277   74485 cri.go:89] found id: ""
	I1105 19:15:32.728314   74485 logs.go:282] 0 containers: []
	W1105 19:15:32.728327   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:15:32.728338   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:32.728352   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:32.815579   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:15:32.815615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:32.856776   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:32.856807   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:32.909477   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:32.909518   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:32.923789   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:32.923817   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:15:32.997898   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:15:35.498040   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:15:35.511537   74485 kubeadm.go:597] duration metric: took 4m4.46832509s to restartPrimaryControlPlane
	W1105 19:15:35.511612   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:15:35.511644   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:15:35.586678   74141 api_server.go:253] Checking apiserver healthz at https://192.168.50.10:8444/healthz ...
	I1105 19:15:35.591512   74141 api_server.go:279] https://192.168.50.10:8444/healthz returned 200:
	ok
	I1105 19:15:35.592489   74141 api_server.go:141] control plane version: v1.31.2
	I1105 19:15:35.592507   74141 api_server.go:131] duration metric: took 3.908440367s to wait for apiserver health ...
	I1105 19:15:35.592514   74141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:15:35.592538   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:15:35.592589   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:15:35.636389   74141 cri.go:89] found id: "a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.636408   74141 cri.go:89] found id: ""
	I1105 19:15:35.636416   74141 logs.go:282] 1 containers: [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9]
	I1105 19:15:35.636463   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.640778   74141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:15:35.640839   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:15:35.676793   74141 cri.go:89] found id: "e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:35.676818   74141 cri.go:89] found id: ""
	I1105 19:15:35.676828   74141 logs.go:282] 1 containers: [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e]
	I1105 19:15:35.676890   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.681596   74141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:15:35.681669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:15:35.721728   74141 cri.go:89] found id: "531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:35.721754   74141 cri.go:89] found id: ""
	I1105 19:15:35.721763   74141 logs.go:282] 1 containers: [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20]
	I1105 19:15:35.721808   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.725619   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:15:35.725677   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:15:35.765348   74141 cri.go:89] found id: "6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:35.765377   74141 cri.go:89] found id: ""
	I1105 19:15:35.765386   74141 logs.go:282] 1 containers: [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a]
	I1105 19:15:35.765439   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.769594   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:15:35.769669   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:15:35.809427   74141 cri.go:89] found id: "e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:35.809452   74141 cri.go:89] found id: ""
	I1105 19:15:35.809460   74141 logs.go:282] 1 containers: [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb]
	I1105 19:15:35.809505   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.814317   74141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:15:35.814376   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:15:35.853861   74141 cri.go:89] found id: "4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:35.853882   74141 cri.go:89] found id: ""
	I1105 19:15:35.853890   74141 logs.go:282] 1 containers: [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2]
	I1105 19:15:35.853934   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.857734   74141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:15:35.857787   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:15:35.897791   74141 cri.go:89] found id: ""
	I1105 19:15:35.897816   74141 logs.go:282] 0 containers: []
	W1105 19:15:35.897824   74141 logs.go:284] No container was found matching "kindnet"
	I1105 19:15:35.897830   74141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1105 19:15:35.897887   74141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1105 19:15:35.940906   74141 cri.go:89] found id: "44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:35.940940   74141 cri.go:89] found id: "6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
	I1105 19:15:35.940946   74141 cri.go:89] found id: ""
	I1105 19:15:35.940954   74141 logs.go:282] 2 containers: [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976]
	I1105 19:15:35.941006   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.945200   74141 ssh_runner.go:195] Run: which crictl
	I1105 19:15:35.948860   74141 logs.go:123] Gathering logs for kube-apiserver [a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9] ...
	I1105 19:15:35.948884   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8de930573a6457a7b185c8483c255164e4e6482a27b9cd0266db6334d1986f9"
	I1105 19:15:35.992660   74141 logs.go:123] Gathering logs for etcd [e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e] ...
	I1105 19:15:35.992690   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6393e5b4069d2a351c5ab5a27babeb06b7b193b001894190e64c9a9753c2d1e"
	I1105 19:15:36.033586   74141 logs.go:123] Gathering logs for kube-scheduler [6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a] ...
	I1105 19:15:36.033617   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf66f706c934fe28b1f02636e35541cdca26ceade172084075c88ff247efd5a"
	I1105 19:15:36.066599   74141 logs.go:123] Gathering logs for kube-proxy [e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb] ...
	I1105 19:15:36.066643   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8180f551c559c9bf94379032a3bec0e030dc898b5e8e7185cdcef077d771beb"
	I1105 19:15:36.104895   74141 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:15:36.104932   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:15:36.489747   74141 logs.go:123] Gathering logs for container status ...
	I1105 19:15:36.489781   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 19:15:36.531923   74141 logs.go:123] Gathering logs for kubelet ...
	I1105 19:15:36.531952   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:15:36.598718   74141 logs.go:123] Gathering logs for dmesg ...
	I1105 19:15:36.598758   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:15:36.612969   74141 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:15:36.612998   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 19:15:36.718535   74141 logs.go:123] Gathering logs for coredns [531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20] ...
	I1105 19:15:36.718568   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 531bb8d98703d9a464b264ddbe4fbfbb07f71e9b682e31bb3d31f79b15639c20"
	I1105 19:15:36.755636   74141 logs.go:123] Gathering logs for kube-controller-manager [4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2] ...
	I1105 19:15:36.755677   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a77037302cd0a2f754569fc0c516c0efeccb58e333eb21471feac44f99553b2"
	I1105 19:15:36.815561   74141 logs.go:123] Gathering logs for storage-provisioner [44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b] ...
	I1105 19:15:36.815640   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44080c0e289a78a9a8c35dec12d54526a07b2a6bb03d267ed78afb3848c2037b"
	I1105 19:15:36.850878   74141 logs.go:123] Gathering logs for storage-provisioner [6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976] ...
	I1105 19:15:36.850904   74141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6039942d4d993f3e367b17f122517f183de2512ff68b850b9096da1b30671976"
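For reference, the log-gathering pass above can be reproduced by hand on the node with the same crictl and journalctl invocations minikube issues: resolve a container ID by component name, then tail its logs; kubelet and CRI-O logs come from journald. A minimal sketch (component names taken from the log above; the resolved IDs differ per run):

    # resolve the container ID for one component (same flags as above)
    ID=$(sudo crictl ps -a --quiet --name=kube-scheduler)
    # tail the last 400 lines of that container's logs
    sudo /usr/bin/crictl logs --tail 400 "$ID"
    # kubelet and CRI-O logs are gathered from journald instead
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400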
	I1105 19:15:39.390699   74141 system_pods.go:59] 8 kube-system pods found
	I1105 19:15:39.390733   74141 system_pods.go:61] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.390738   74141 system_pods.go:61] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.390743   74141 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.390747   74141 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.390750   74141 system_pods.go:61] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.390753   74141 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.390760   74141 system_pods.go:61] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.390764   74141 system_pods.go:61] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.390771   74141 system_pods.go:74] duration metric: took 3.798251189s to wait for pod list to return data ...
	I1105 19:15:39.390777   74141 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:15:39.393894   74141 default_sa.go:45] found service account: "default"
	I1105 19:15:39.393914   74141 default_sa.go:55] duration metric: took 3.132788ms for default service account to be created ...
	I1105 19:15:39.393929   74141 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:15:39.398455   74141 system_pods.go:86] 8 kube-system pods found
	I1105 19:15:39.398480   74141 system_pods.go:89] "coredns-7c65d6cfc9-cdvml" [0b47fc10-0352-47df-aef2-46083091a840] Running
	I1105 19:15:39.398485   74141 system_pods.go:89] "etcd-default-k8s-diff-port-608095" [18456b47-391c-4c3f-a836-31cd663edfad] Running
	I1105 19:15:39.398490   74141 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-608095" [f7c66ae7-7ae1-4d4c-89ab-2a1c031a9a9a] Running
	I1105 19:15:39.398494   74141 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-608095" [aae9d29e-3785-4bb3-b3b7-99fda9489f2a] Running
	I1105 19:15:39.398497   74141 system_pods.go:89] "kube-proxy-8v42c" [007c81ba-8ec7-4cdf-87a0-17c9225a3aa0] Running
	I1105 19:15:39.398501   74141 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-608095" [0f831537-6b0e-4c1c-9ecf-cac491d47338] Running
	I1105 19:15:39.398508   74141 system_pods.go:89] "metrics-server-6867b74b74-44mcg" [1af2bd4e-49d9-4126-9192-7d2697e2a601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:15:39.398512   74141 system_pods.go:89] "storage-provisioner" [df6efb9a-59ec-4296-baa4-91bbac895315] Running
	I1105 19:15:39.398520   74141 system_pods.go:126] duration metric: took 4.586494ms to wait for k8s-apps to be running ...
	I1105 19:15:39.398529   74141 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:15:39.398569   74141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.413878   74141 system_svc.go:56] duration metric: took 15.340417ms WaitForService to wait for kubelet
	I1105 19:15:39.413908   74141 kubeadm.go:582] duration metric: took 4m20.225910976s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:15:39.413936   74141 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:15:39.416851   74141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:15:39.416870   74141 node_conditions.go:123] node cpu capacity is 2
	I1105 19:15:39.416880   74141 node_conditions.go:105] duration metric: took 2.939584ms to run NodePressure ...
	I1105 19:15:39.416891   74141 start.go:241] waiting for startup goroutines ...
	I1105 19:15:39.416899   74141 start.go:246] waiting for cluster config update ...
	I1105 19:15:39.416911   74141 start.go:255] writing updated cluster config ...
	I1105 19:15:39.417211   74141 ssh_runner.go:195] Run: rm -f paused
	I1105 19:15:39.463773   74141 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:15:39.465688   74141 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-608095" cluster and "default" namespace by default
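With the "Done!" line above, the default-k8s-diff-port-608095 profile is fully started, and the readiness gates minikube just walked through (kube-system pod list, default service account, kubelet service) can be re-checked manually. A sketch using the context name minikube configured above; output will vary per run:

    kubectl --context default-k8s-diff-port-608095 -n kube-system get pods
    kubectl --context default-k8s-diff-port-608095 get serviceaccount default
    # on the node itself, the same kubelet liveness check minikube runs:
    sudo systemctl is-active --quiet service kubelet && echo kubelet is active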
	I1105 19:15:39.702249   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.19058336s)
	I1105 19:15:39.702314   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:39.717966   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:39.728114   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:39.740451   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:39.740476   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:39.740519   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:39.751089   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:39.751150   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:39.761832   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:39.771841   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:39.771904   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:39.782332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.792379   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:39.792438   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:39.801625   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:39.811691   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:39.811740   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
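The stale-config check above repeats one pattern per kubeconfig file: grep for the expected control-plane endpoint and, when it is absent (or the file does not exist, as in this run), remove the file before re-running kubeadm init. A condensed sketch of that loop, with the endpoint and paths taken from the log (grep -q is added here only to silence output):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done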
	I1105 19:15:39.821162   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:39.891377   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:15:39.891443   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:40.034176   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:40.034337   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:40.034476   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:15:40.211588   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:35.847491   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:38.346965   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.348252   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:40.213724   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:40.213838   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:40.213939   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:40.214048   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:40.214172   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:40.214266   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:40.214375   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:40.214478   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:40.214567   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:40.214687   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:40.214819   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:40.214884   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:40.214980   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:40.358606   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:40.632263   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:40.766570   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:40.885914   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:40.902379   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:40.903647   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:40.903716   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:41.040274   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:41.042093   74485 out.go:235]   - Booting up control plane ...
	I1105 19:15:41.042222   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:41.048448   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:41.058445   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:41.059466   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:41.062648   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:15:38.649673   73732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193536212s)
	I1105 19:15:38.649753   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:15:38.665214   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:15:38.674520   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:15:38.684078   73732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:15:38.684102   73732 kubeadm.go:157] found existing configuration files:
	
	I1105 19:15:38.684151   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:15:38.693169   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:15:38.693239   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:15:38.702305   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:15:38.710796   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:15:38.710868   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:15:38.719716   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.728090   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:15:38.728143   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:15:38.737219   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:15:38.745625   73732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:15:38.745692   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:15:38.754684   73732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:15:38.914343   73732 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:15:42.847011   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:44.851431   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:47.368221   73732 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:15:47.368296   73732 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:15:47.368405   73732 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:15:47.368552   73732 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:15:47.368686   73732 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:15:47.368787   73732 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:15:47.370333   73732 out.go:235]   - Generating certificates and keys ...
	I1105 19:15:47.370429   73732 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:15:47.370529   73732 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:15:47.370650   73732 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:15:47.370763   73732 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:15:47.370900   73732 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:15:47.371009   73732 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:15:47.371110   73732 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:15:47.371198   73732 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:15:47.371312   73732 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:15:47.371431   73732 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:15:47.371494   73732 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:15:47.371573   73732 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:15:47.371656   73732 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:15:47.371725   73732 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:15:47.371797   73732 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:15:47.371893   73732 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:15:47.371976   73732 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:15:47.372074   73732 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:15:47.372160   73732 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:15:47.374386   73732 out.go:235]   - Booting up control plane ...
	I1105 19:15:47.374503   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:15:47.374622   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:15:47.374707   73732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:15:47.374838   73732 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:15:47.374950   73732 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:15:47.375046   73732 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:15:47.375226   73732 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:15:47.375367   73732 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:15:47.375450   73732 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.124171ms
	I1105 19:15:47.375549   73732 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:15:47.375647   73732 kubeadm.go:310] [api-check] The API server is healthy after 5.001431223s
	I1105 19:15:47.375804   73732 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:15:47.375968   73732 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:15:47.376055   73732 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:15:47.376321   73732 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-271881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:15:47.376412   73732 kubeadm.go:310] [bootstrap-token] Using token: 2xak8n.owgv6oncwawjarav
	I1105 19:15:47.377766   73732 out.go:235]   - Configuring RBAC rules ...
	I1105 19:15:47.377911   73732 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:15:47.378024   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:15:47.378138   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:15:47.378243   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:15:47.378337   73732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:15:47.378408   73732 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:15:47.378502   73732 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:15:47.378541   73732 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:15:47.378580   73732 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:15:47.378587   73732 kubeadm.go:310] 
	I1105 19:15:47.378635   73732 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:15:47.378645   73732 kubeadm.go:310] 
	I1105 19:15:47.378711   73732 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:15:47.378718   73732 kubeadm.go:310] 
	I1105 19:15:47.378760   73732 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:15:47.378813   73732 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:15:47.378856   73732 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:15:47.378860   73732 kubeadm.go:310] 
	I1105 19:15:47.378910   73732 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:15:47.378913   73732 kubeadm.go:310] 
	I1105 19:15:47.378955   73732 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:15:47.378959   73732 kubeadm.go:310] 
	I1105 19:15:47.379030   73732 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:15:47.379114   73732 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:15:47.379195   73732 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:15:47.379203   73732 kubeadm.go:310] 
	I1105 19:15:47.379320   73732 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:15:47.379427   73732 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:15:47.379442   73732 kubeadm.go:310] 
	I1105 19:15:47.379559   73732 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.379718   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:15:47.379762   73732 kubeadm.go:310] 	--control-plane 
	I1105 19:15:47.379770   73732 kubeadm.go:310] 
	I1105 19:15:47.379844   73732 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:15:47.379851   73732 kubeadm.go:310] 
	I1105 19:15:47.379977   73732 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2xak8n.owgv6oncwawjarav \
	I1105 19:15:47.380150   73732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
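The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's public key. If the printed command is lost, the hash can be recomputed on the control-plane node with the standard openssl pipeline from the kubeadm documentation; note that this minikube cluster keeps its certificates under /var/lib/minikube/certs (see the certificateDir line above) rather than the upstream default /etc/kubernetes/pki. A sketch:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

Alternatively, kubeadm token create --print-join-command regenerates a complete join command with a fresh token.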
	I1105 19:15:47.380167   73732 cni.go:84] Creating CNI manager for ""
	I1105 19:15:47.380174   73732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:15:47.381714   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:15:47.382944   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:15:47.394080   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:15:47.411715   73732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:15:47.411773   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.411821   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-271881 minikube.k8s.io/updated_at=2024_11_05T19_15_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=embed-certs-271881 minikube.k8s.io/primary=true
	I1105 19:15:47.439084   73732 ops.go:34] apiserver oom_adj: -16
	I1105 19:15:47.601691   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:47.348094   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:49.847296   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:48.102103   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:48.602767   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.101780   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:49.601826   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.101976   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:50.602763   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.102779   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:51.601930   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.102574   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:15:52.241636   73732 kubeadm.go:1113] duration metric: took 4.829922813s to wait for elevateKubeSystemPrivileges
	I1105 19:15:52.241680   73732 kubeadm.go:394] duration metric: took 5m2.866246993s to StartCluster
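The half-second-spaced `kubectl get sa default` calls above are minikube polling until the default service account exists before it finishes elevating kube-system privileges (the minikube-rbac clusterrolebinding created at 19:15:47.411773). An equivalent hand-rolled wait is a simple retry loop, with the binary and kubeconfig paths taken from the log:

    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done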
	I1105 19:15:52.241704   73732 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.241801   73732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:15:52.244409   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:15:52.244716   73732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:15:52.244789   73732 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:15:52.244893   73732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-271881"
	I1105 19:15:52.244914   73732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-271881"
	I1105 19:15:52.244911   73732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-271881"
	I1105 19:15:52.244933   73732 addons.go:69] Setting metrics-server=true in profile "embed-certs-271881"
	I1105 19:15:52.244941   73732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-271881"
	I1105 19:15:52.244954   73732 addons.go:234] Setting addon metrics-server=true in "embed-certs-271881"
	W1105 19:15:52.244965   73732 addons.go:243] addon metrics-server should already be in state true
	I1105 19:15:52.244998   73732 config.go:182] Loaded profile config "embed-certs-271881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1105 19:15:52.244925   73732 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:15:52.245001   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245065   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.245404   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245422   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245436   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.245455   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245464   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.245543   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.246341   73732 out.go:177] * Verifying Kubernetes components...
	I1105 19:15:52.247801   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:15:52.261802   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I1105 19:15:52.262325   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.262955   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.263159   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.263591   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.264367   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.264413   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.265696   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I1105 19:15:52.265941   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I1105 19:15:52.266161   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266322   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.266776   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266782   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.266800   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.266803   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.267185   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267224   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.267353   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.267804   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.267846   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.271094   73732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-271881"
	W1105 19:15:52.271117   73732 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:15:52.271147   73732 host.go:66] Checking if "embed-certs-271881" exists ...
	I1105 19:15:52.271509   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.271554   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.284180   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I1105 19:15:52.284456   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1105 19:15:52.284703   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.284925   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.285248   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285261   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285355   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.285363   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.285578   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285727   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.285766   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.285862   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.287834   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.288259   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.290341   73732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:15:52.290346   73732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:15:52.290695   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I1105 19:15:52.291040   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.291464   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.291479   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.291776   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.291974   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:15:52.291994   73732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:15:52.292015   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292054   73732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.292067   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:15:52.292079   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.292355   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:15:52.292400   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:15:52.295296   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295650   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.295675   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295701   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.295797   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.295969   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296102   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296247   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.296272   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.296305   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.296582   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.296714   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.296848   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.296947   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.314049   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I1105 19:15:52.314561   73732 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:15:52.315148   73732 main.go:141] libmachine: Using API Version  1
	I1105 19:15:52.315168   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:15:52.315884   73732 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:15:52.316080   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetState
	I1105 19:15:52.318146   73732 main.go:141] libmachine: (embed-certs-271881) Calling .DriverName
	I1105 19:15:52.318465   73732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.318478   73732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:15:52.318496   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHHostname
	I1105 19:15:52.321312   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321825   73732 main.go:141] libmachine: (embed-certs-271881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:3c:9f", ip: ""} in network mk-embed-certs-271881: {Iface:virbr1 ExpiryTime:2024-11-05 20:10:33 +0000 UTC Type:0 Mac:52:54:00:df:3c:9f Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:embed-certs-271881 Clientid:01:52:54:00:df:3c:9f}
	I1105 19:15:52.321850   73732 main.go:141] libmachine: (embed-certs-271881) DBG | domain embed-certs-271881 has defined IP address 192.168.39.58 and MAC address 52:54:00:df:3c:9f in network mk-embed-certs-271881
	I1105 19:15:52.321885   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHPort
	I1105 19:15:52.322095   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHKeyPath
	I1105 19:15:52.322238   73732 main.go:141] libmachine: (embed-certs-271881) Calling .GetSSHUsername
	I1105 19:15:52.322397   73732 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/embed-certs-271881/id_rsa Username:docker}
	I1105 19:15:52.453762   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:15:52.483722   73732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493492   73732 node_ready.go:49] node "embed-certs-271881" has status "Ready":"True"
	I1105 19:15:52.493519   73732 node_ready.go:38] duration metric: took 9.757528ms for node "embed-certs-271881" to be "Ready" ...
	I1105 19:15:52.493530   73732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:15:52.508208   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:15:52.577925   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:15:52.589366   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:15:52.589389   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:15:52.612570   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:15:52.612593   73732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:15:52.645851   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:15:52.647686   73732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:15:52.647713   73732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:15:52.668865   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
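The metrics-server addon is installed above by copying four manifests to /etc/kubernetes/addons and applying them in a single kubectl apply. Its rollout can then be checked with ordinary kubectl commands; a sketch using the embed-certs-271881 context, and assuming the addon keeps the upstream k8s-app=metrics-server label (the deployment name matches the metrics-server-6867b74b74-tvl8v pod seen later in this log):

    kubectl --context embed-certs-271881 -n kube-system get deployment metrics-server
    kubectl --context embed-certs-271881 -n kube-system get pods -l k8s-app=metrics-server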
	I1105 19:15:53.246894   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246918   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.246923   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.246950   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247230   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247277   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247305   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247323   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247338   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247349   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247331   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247368   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.247378   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.247710   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247739   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247746   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.247779   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.247800   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.247811   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.269143   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.269165   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.269465   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.269479   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.269483   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.494717   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.494741   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495080   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495100   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495114   73732 main.go:141] libmachine: Making call to close driver server
	I1105 19:15:53.495123   73732 main.go:141] libmachine: (embed-certs-271881) Calling .Close
	I1105 19:15:53.495348   73732 main.go:141] libmachine: (embed-certs-271881) DBG | Closing plugin on server side
	I1105 19:15:53.495394   73732 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:15:53.495414   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:15:53.495427   73732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-271881"
	I1105 19:15:53.497126   73732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:15:52.347616   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:54.352434   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:53.498891   73732 addons.go:510] duration metric: took 1.254108253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1105 19:15:54.518219   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:57.015647   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:56.846198   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:58.847684   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:15:59.514759   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:01.514818   73732 pod_ready.go:103] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:02.515124   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.515148   73732 pod_ready.go:82] duration metric: took 10.006914802s for pod "coredns-7c65d6cfc9-7dk86" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.515158   73732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519864   73732 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.519889   73732 pod_ready.go:82] duration metric: took 4.723101ms for pod "coredns-7c65d6cfc9-v5vt6" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.519900   73732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524948   73732 pod_ready.go:93] pod "etcd-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.524970   73732 pod_ready.go:82] duration metric: took 5.063029ms for pod "etcd-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.524979   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529710   73732 pod_ready.go:93] pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.529739   73732 pod_ready.go:82] duration metric: took 4.753888ms for pod "kube-apiserver-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.529750   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534282   73732 pod_ready.go:93] pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.534301   73732 pod_ready.go:82] duration metric: took 4.544677ms for pod "kube-controller-manager-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.534309   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912364   73732 pod_ready.go:93] pod "kube-proxy-nfxcj" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:02.912387   73732 pod_ready.go:82] duration metric: took 378.071939ms for pod "kube-proxy-nfxcj" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:02.912397   73732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311793   73732 pod_ready.go:93] pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace has status "Ready":"True"
	I1105 19:16:03.311816   73732 pod_ready.go:82] duration metric: took 399.412502ms for pod "kube-scheduler-embed-certs-271881" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:03.311822   73732 pod_ready.go:39] duration metric: took 10.818282425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:03.311836   73732 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:16:03.311883   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:16:03.327913   73732 api_server.go:72] duration metric: took 11.083157176s to wait for apiserver process to appear ...
	I1105 19:16:03.327947   73732 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:16:03.327968   73732 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1105 19:16:03.334499   73732 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1105 19:16:03.335530   73732 api_server.go:141] control plane version: v1.31.2
	I1105 19:16:03.335550   73732 api_server.go:131] duration metric: took 7.596072ms to wait for apiserver health ...
	I1105 19:16:03.335558   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:16:03.514782   73732 system_pods.go:59] 9 kube-system pods found
	I1105 19:16:03.514813   73732 system_pods.go:61] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.514820   73732 system_pods.go:61] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.514825   73732 system_pods.go:61] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.514830   73732 system_pods.go:61] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.514835   73732 system_pods.go:61] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.514840   73732 system_pods.go:61] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.514844   73732 system_pods.go:61] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.514854   73732 system_pods.go:61] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.514859   73732 system_pods.go:61] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.514868   73732 system_pods.go:74] duration metric: took 179.304519ms to wait for pod list to return data ...
	I1105 19:16:03.514877   73732 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:16:03.712690   73732 default_sa.go:45] found service account: "default"
	I1105 19:16:03.712719   73732 default_sa.go:55] duration metric: took 197.831177ms for default service account to be created ...
	I1105 19:16:03.712731   73732 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:16:03.916858   73732 system_pods.go:86] 9 kube-system pods found
	I1105 19:16:03.916893   73732 system_pods.go:89] "coredns-7c65d6cfc9-7dk86" [170744f6-4b55-458d-a270-a8aa397c9cd3] Running
	I1105 19:16:03.916902   73732 system_pods.go:89] "coredns-7c65d6cfc9-v5vt6" [ebe11308-47aa-454a-97bd-5e6c5145a99a] Running
	I1105 19:16:03.916908   73732 system_pods.go:89] "etcd-embed-certs-271881" [9bd68561-ed9d-4bd4-982d-3be2521e3003] Running
	I1105 19:16:03.916913   73732 system_pods.go:89] "kube-apiserver-embed-certs-271881" [5cfcf85c-0452-4e68-b608-ba2b7d87f4c5] Running
	I1105 19:16:03.916918   73732 system_pods.go:89] "kube-controller-manager-embed-certs-271881" [5cfbef06-69ab-47ad-bc32-ff0e9494efbf] Running
	I1105 19:16:03.916921   73732 system_pods.go:89] "kube-proxy-nfxcj" [2910ec66-6528-4d00-91c0-588a93c54fcf] Running
	I1105 19:16:03.916924   73732 system_pods.go:89] "kube-scheduler-embed-certs-271881" [c146cdcf-bf28-4ebf-8475-b36ff24b3b99] Running
	I1105 19:16:03.916934   73732 system_pods.go:89] "metrics-server-6867b74b74-tvl8v" [fb0b97cb-ee9c-40cf-9fc1-defcd11fad19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:16:03.916941   73732 system_pods.go:89] "storage-provisioner" [18a73546-576b-456e-9a91-a2a0d62880dd] Running
	I1105 19:16:03.916953   73732 system_pods.go:126] duration metric: took 204.215711ms to wait for k8s-apps to be running ...
	I1105 19:16:03.916963   73732 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:16:03.917019   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:03.931369   73732 system_svc.go:56] duration metric: took 14.397556ms WaitForService to wait for kubelet
	I1105 19:16:03.931407   73732 kubeadm.go:582] duration metric: took 11.686653516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:16:03.931454   73732 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:16:04.111904   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:16:04.111928   73732 node_conditions.go:123] node cpu capacity is 2
	I1105 19:16:04.111937   73732 node_conditions.go:105] duration metric: took 180.475073ms to run NodePressure ...
	I1105 19:16:04.111947   73732 start.go:241] waiting for startup goroutines ...
	I1105 19:16:04.111953   73732 start.go:246] waiting for cluster config update ...
	I1105 19:16:04.111962   73732 start.go:255] writing updated cluster config ...
	I1105 19:16:04.112197   73732 ssh_runner.go:195] Run: rm -f paused
	I1105 19:16:04.158775   73732 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:16:04.160801   73732 out.go:177] * Done! kubectl is now configured to use "embed-certs-271881" cluster and "default" namespace by default
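The pod_ready.go entries above poll each system pod until its Ready condition reports "True", after which the api_server.go lines confirm the apiserver process and its healthz endpoint. Outside the harness, roughly the same readiness check can be reproduced by hand with kubectl; this is only an illustrative sketch (it assumes the kubeconfig context carries the profile name shown above), not part of the recorded run:

	# wait on the same label groups the harness tracks, e.g. CoreDNS and etcd
	kubectl --context embed-certs-271881 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	kubectl --context embed-certs-271881 -n kube-system wait pod \
	  -l component=etcd --for=condition=Ready --timeout=6m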
	I1105 19:16:01.346039   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:03.346369   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:05.846866   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:08.346383   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:10.346570   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:12.347171   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:14.846335   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.347002   73496 pod_ready.go:103] pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace has status "Ready":"False"
	I1105 19:16:17.840591   73496 pod_ready.go:82] duration metric: took 4m0.000143963s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" ...
	E1105 19:16:17.840620   73496 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5sp2j" in "kube-system" namespace to be "Ready" (will not retry!)
	I1105 19:16:17.840649   73496 pod_ready.go:39] duration metric: took 4m11.022533189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:17.840682   73496 kubeadm.go:597] duration metric: took 4m18.432062793s to restartPrimaryControlPlane
	W1105 19:16:17.840732   73496 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1105 19:16:17.840755   73496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:16:21.064069   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:16:21.064607   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:21.064798   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:26.065202   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:26.065410   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:36.065932   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:36.066151   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:43.960239   73496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.119460606s)
	I1105 19:16:43.960324   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:16:43.986199   73496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 19:16:43.999287   73496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:16:44.013653   73496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:16:44.013675   73496 kubeadm.go:157] found existing configuration files:
	
	I1105 19:16:44.013718   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:16:44.026073   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:16:44.026140   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:16:44.038723   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:16:44.050880   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:16:44.050957   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:16:44.061696   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.071739   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:16:44.072301   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:16:44.084030   73496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:16:44.093217   73496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:16:44.093275   73496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:16:44.102494   73496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:16:44.267623   73496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:16:52.534375   73496 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 19:16:52.534458   73496 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:16:52.534569   73496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:16:52.534704   73496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:16:52.534834   73496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 19:16:52.534930   73496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:16:52.536666   73496 out.go:235]   - Generating certificates and keys ...
	I1105 19:16:52.536759   73496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:16:52.536836   73496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:16:52.536911   73496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:16:52.536963   73496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:16:52.537060   73496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:16:52.537145   73496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:16:52.537232   73496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:16:52.537286   73496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:16:52.537361   73496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:16:52.537455   73496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:16:52.537500   73496 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:16:52.537578   73496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:16:52.537648   73496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:16:52.537725   73496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 19:16:52.537797   73496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:16:52.537905   73496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:16:52.537988   73496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:16:52.538075   73496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:16:52.538136   73496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:16:52.539588   73496 out.go:235]   - Booting up control plane ...
	I1105 19:16:52.539669   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:16:52.539743   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:16:52.539800   73496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:16:52.539885   73496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:16:52.539987   73496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:16:52.540057   73496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:16:52.540206   73496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 19:16:52.540300   73496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 19:16:52.540367   73496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733469ms
	I1105 19:16:52.540447   73496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 19:16:52.540528   73496 kubeadm.go:310] [api-check] The API server is healthy after 5.001962829s
	I1105 19:16:52.540651   73496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 19:16:52.540806   73496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 19:16:52.540899   73496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 19:16:52.541094   73496 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-459223 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 19:16:52.541164   73496 kubeadm.go:310] [bootstrap-token] Using token: f0bzzt.jihwqjda853aoxrb
	I1105 19:16:52.543528   73496 out.go:235]   - Configuring RBAC rules ...
	I1105 19:16:52.543658   73496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 19:16:52.543777   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 19:16:52.543942   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 19:16:52.544072   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 19:16:52.544222   73496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 19:16:52.544327   73496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 19:16:52.544453   73496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 19:16:52.544493   73496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 19:16:52.544536   73496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 19:16:52.544542   73496 kubeadm.go:310] 
	I1105 19:16:52.544593   73496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 19:16:52.544599   73496 kubeadm.go:310] 
	I1105 19:16:52.544687   73496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 19:16:52.544701   73496 kubeadm.go:310] 
	I1105 19:16:52.544739   73496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 19:16:52.544795   73496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 19:16:52.544855   73496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 19:16:52.544881   73496 kubeadm.go:310] 
	I1105 19:16:52.544958   73496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 19:16:52.544971   73496 kubeadm.go:310] 
	I1105 19:16:52.545039   73496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 19:16:52.545049   73496 kubeadm.go:310] 
	I1105 19:16:52.545111   73496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 19:16:52.545193   73496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 19:16:52.545251   73496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 19:16:52.545257   73496 kubeadm.go:310] 
	I1105 19:16:52.545324   73496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 19:16:52.545403   73496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 19:16:52.545409   73496 kubeadm.go:310] 
	I1105 19:16:52.545480   73496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.545605   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c \
	I1105 19:16:52.545638   73496 kubeadm.go:310] 	--control-plane 
	I1105 19:16:52.545648   73496 kubeadm.go:310] 
	I1105 19:16:52.545779   73496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 19:16:52.545794   73496 kubeadm.go:310] 
	I1105 19:16:52.545903   73496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f0bzzt.jihwqjda853aoxrb \
	I1105 19:16:52.546059   73496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4ac191d0d9001df9a0e25ed8eebdbd407be514a604e452dfe4ba273d8229d47c 
	I1105 19:16:52.546074   73496 cni.go:84] Creating CNI manager for ""
	I1105 19:16:52.546083   73496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 19:16:52.548357   73496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 19:16:52.549732   73496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 19:16:52.560406   73496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 19:16:52.577268   73496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 19:16:52.577334   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:52.577373   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-459223 minikube.k8s.io/updated_at=2024_11_05T19_16_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=no-preload-459223 minikube.k8s.io/primary=true
	I1105 19:16:52.776299   73496 ops.go:34] apiserver oom_adj: -16
	I1105 19:16:52.776456   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.276618   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:53.777474   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.276726   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:54.777004   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.276725   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.777410   73496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 19:16:55.893941   73496 kubeadm.go:1113] duration metric: took 3.316665512s to wait for elevateKubeSystemPrivileges
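The repeated "kubectl get sa default" runs above are a retry loop: the command is re-issued roughly every half second (per the timestamps) until the default service account exists, which appears to be what the elevateKubeSystemPrivileges metric measures. A hand-rolled equivalent, shown only as a sketch using the binary and kubeconfig paths from the log:

	# poll until the default service account has been created
	until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done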
	I1105 19:16:55.893984   73496 kubeadm.go:394] duration metric: took 4m56.532038314s to StartCluster
	I1105 19:16:55.894007   73496 settings.go:142] acquiring lock: {Name:mk2e796c383ac0fe2824db6d82d35bc537cb77a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.894104   73496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 19:16:55.896620   73496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/kubeconfig: {Name:mke2f2db224c8017df5999395ac33ac4f769b38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 19:16:55.896934   73496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.101 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 19:16:55.897120   73496 config.go:182] Loaded profile config "no-preload-459223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 19:16:55.897056   73496 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 19:16:55.897166   73496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-459223"
	I1105 19:16:55.897176   73496 addons.go:69] Setting default-storageclass=true in profile "no-preload-459223"
	I1105 19:16:55.897186   73496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-459223"
	I1105 19:16:55.897193   73496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-459223"
	I1105 19:16:55.897211   73496 addons.go:69] Setting metrics-server=true in profile "no-preload-459223"
	I1105 19:16:55.897231   73496 addons.go:234] Setting addon metrics-server=true in "no-preload-459223"
	W1105 19:16:55.897243   73496 addons.go:243] addon metrics-server should already be in state true
	I1105 19:16:55.897271   73496 host.go:66] Checking if "no-preload-459223" exists ...
	W1105 19:16:55.897195   73496 addons.go:243] addon storage-provisioner should already be in state true
	I1105 19:16:55.897323   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.897599   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897642   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897705   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897754   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.897711   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.897811   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.898341   73496 out.go:177] * Verifying Kubernetes components...
	I1105 19:16:55.899778   73496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 19:16:55.914218   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1105 19:16:55.914305   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1105 19:16:55.914726   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.914837   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.915283   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915305   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915391   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.915418   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.915642   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915757   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.915804   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.916323   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.916367   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.916858   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1105 19:16:55.917296   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.917805   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.917832   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.918156   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.918678   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.918720   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.919527   73496 addons.go:234] Setting addon default-storageclass=true in "no-preload-459223"
	W1105 19:16:55.919549   73496 addons.go:243] addon default-storageclass should already be in state true
	I1105 19:16:55.919576   73496 host.go:66] Checking if "no-preload-459223" exists ...
	I1105 19:16:55.919954   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.919996   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.932547   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I1105 19:16:55.933026   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.933588   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.933601   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.933918   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.934153   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.936094   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.937415   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I1105 19:16:55.937800   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.937812   73496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1105 19:16:55.938312   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.938324   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.938420   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I1105 19:16:55.938661   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.938816   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.938867   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 19:16:55.938894   73496 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 19:16:55.938918   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.939014   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.939350   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.939362   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.939855   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.940281   73496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 19:16:55.940310   73496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 19:16:55.940959   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.942661   73496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 19:16:55.942797   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943216   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.943392   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.943422   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.943588   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.943842   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.944078   73496 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:55.944083   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.944096   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 19:16:55.944114   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.947574   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.947767   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.947789   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.948125   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.948249   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.948343   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.948424   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:55.987691   73496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I1105 19:16:55.988131   73496 main.go:141] libmachine: () Calling .GetVersion
	I1105 19:16:55.988714   73496 main.go:141] libmachine: Using API Version  1
	I1105 19:16:55.988739   73496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 19:16:55.989127   73496 main.go:141] libmachine: () Calling .GetMachineName
	I1105 19:16:55.989325   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetState
	I1105 19:16:55.991207   73496 main.go:141] libmachine: (no-preload-459223) Calling .DriverName
	I1105 19:16:55.991453   73496 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:55.991472   73496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 19:16:55.991492   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHHostname
	I1105 19:16:55.994362   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994800   73496 main.go:141] libmachine: (no-preload-459223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:84:79", ip: ""} in network mk-no-preload-459223: {Iface:virbr4 ExpiryTime:2024-11-05 20:11:34 +0000 UTC Type:0 Mac:52:54:00:6c:84:79 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:no-preload-459223 Clientid:01:52:54:00:6c:84:79}
	I1105 19:16:55.994846   73496 main.go:141] libmachine: (no-preload-459223) DBG | domain no-preload-459223 has defined IP address 192.168.72.101 and MAC address 52:54:00:6c:84:79 in network mk-no-preload-459223
	I1105 19:16:55.994938   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHPort
	I1105 19:16:55.995145   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHKeyPath
	I1105 19:16:55.995315   73496 main.go:141] libmachine: (no-preload-459223) Calling .GetSSHUsername
	I1105 19:16:55.996088   73496 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/no-preload-459223/id_rsa Username:docker}
	I1105 19:16:56.109142   73496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 19:16:56.126382   73496 node_ready.go:35] waiting up to 6m0s for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138050   73496 node_ready.go:49] node "no-preload-459223" has status "Ready":"True"
	I1105 19:16:56.138076   73496 node_ready.go:38] duration metric: took 11.661265ms for node "no-preload-459223" to be "Ready" ...
	I1105 19:16:56.138087   73496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:16:56.143325   73496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:16:56.230205   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 19:16:56.230228   73496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1105 19:16:56.232603   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 19:16:56.259360   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 19:16:56.259388   73496 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 19:16:56.268694   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 19:16:56.321334   73496 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:56.321364   73496 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 19:16:56.387409   73496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 19:16:57.010417   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010441   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010496   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010522   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010748   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.010795   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010804   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010812   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010818   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.010817   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.010830   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.010838   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.010843   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.011143   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011147   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.011205   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011221   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.011209   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.011298   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074127   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.074148   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.074476   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.074543   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.074508   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.135875   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.135898   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136259   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136280   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136278   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136291   73496 main.go:141] libmachine: Making call to close driver server
	I1105 19:16:57.136308   73496 main.go:141] libmachine: (no-preload-459223) Calling .Close
	I1105 19:16:57.136703   73496 main.go:141] libmachine: (no-preload-459223) DBG | Closing plugin on server side
	I1105 19:16:57.136747   73496 main.go:141] libmachine: Successfully made call to close driver server
	I1105 19:16:57.136757   73496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 19:16:57.136767   73496 addons.go:475] Verifying addon metrics-server=true in "no-preload-459223"
	I1105 19:16:57.138699   73496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1105 19:16:56.066834   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:16:56.067140   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:16:57.140755   73496 addons.go:510] duration metric: took 1.243699533s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1105 19:16:58.154376   73496 pod_ready.go:103] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"False"
	I1105 19:17:00.149838   73496 pod_ready.go:93] pod "etcd-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:00.149864   73496 pod_ready.go:82] duration metric: took 4.006514005s for pod "etcd-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:00.149876   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156460   73496 pod_ready.go:93] pod "kube-apiserver-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.156486   73496 pod_ready.go:82] duration metric: took 1.006602068s for pod "kube-apiserver-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.156499   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160598   73496 pod_ready.go:93] pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.160618   73496 pod_ready.go:82] duration metric: took 4.110322ms for pod "kube-controller-manager-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.160631   73496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164461   73496 pod_ready.go:93] pod "kube-scheduler-no-preload-459223" in "kube-system" namespace has status "Ready":"True"
	I1105 19:17:01.164482   73496 pod_ready.go:82] duration metric: took 3.842329ms for pod "kube-scheduler-no-preload-459223" in "kube-system" namespace to be "Ready" ...
	I1105 19:17:01.164492   73496 pod_ready.go:39] duration metric: took 5.026393011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 19:17:01.164509   73496 api_server.go:52] waiting for apiserver process to appear ...
	I1105 19:17:01.164566   73496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 19:17:01.183307   73496 api_server.go:72] duration metric: took 5.286331754s to wait for apiserver process to appear ...
	I1105 19:17:01.183338   73496 api_server.go:88] waiting for apiserver healthz status ...
	I1105 19:17:01.183357   73496 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8443/healthz ...
	I1105 19:17:01.189083   73496 api_server.go:279] https://192.168.72.101:8443/healthz returned 200:
	ok
	I1105 19:17:01.190439   73496 api_server.go:141] control plane version: v1.31.2
	I1105 19:17:01.190469   73496 api_server.go:131] duration metric: took 7.123058ms to wait for apiserver health ...
	I1105 19:17:01.190479   73496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 19:17:01.198820   73496 system_pods.go:59] 9 kube-system pods found
	I1105 19:17:01.198854   73496 system_pods.go:61] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198862   73496 system_pods.go:61] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.198869   73496 system_pods.go:61] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.198873   73496 system_pods.go:61] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.198879   73496 system_pods.go:61] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.198883   73496 system_pods.go:61] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.198887   73496 system_pods.go:61] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.198893   73496 system_pods.go:61] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.198896   73496 system_pods.go:61] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.198903   73496 system_pods.go:74] duration metric: took 8.418414ms to wait for pod list to return data ...
	I1105 19:17:01.198913   73496 default_sa.go:34] waiting for default service account to be created ...
	I1105 19:17:01.202229   73496 default_sa.go:45] found service account: "default"
	I1105 19:17:01.202251   73496 default_sa.go:55] duration metric: took 3.332652ms for default service account to be created ...
	I1105 19:17:01.202260   73496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 19:17:01.208774   73496 system_pods.go:86] 9 kube-system pods found
	I1105 19:17:01.208803   73496 system_pods.go:89] "coredns-7c65d6cfc9-gl9th" [9bee65a6-f684-4675-b356-62602fa628c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208811   73496 system_pods.go:89] "coredns-7c65d6cfc9-xx9wl" [17910730-8b50-4223-8af5-82b701aa2f96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 19:17:01.208817   73496 system_pods.go:89] "etcd-no-preload-459223" [6a299398-7d5f-44d0-a7ad-5742117ea3eb] Running
	I1105 19:17:01.208821   73496 system_pods.go:89] "kube-apiserver-no-preload-459223" [bb96a5b1-0e25-4c8a-ac83-8e35a22144e8] Running
	I1105 19:17:01.208825   73496 system_pods.go:89] "kube-controller-manager-no-preload-459223" [04dd5b30-192e-48b9-b643-9868b288155a] Running
	I1105 19:17:01.208828   73496 system_pods.go:89] "kube-proxy-txq44" [5f4a537b-e4cc-4254-9a22-679795366362] Running
	I1105 19:17:01.208833   73496 system_pods.go:89] "kube-scheduler-no-preload-459223" [a5162f2c-563c-47fd-8ab7-720959246f7e] Running
	I1105 19:17:01.208838   73496 system_pods.go:89] "metrics-server-6867b74b74-qbgx4" [41686f85-3122-40a1-9c77-70ddef66069e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1105 19:17:01.208842   73496 system_pods.go:89] "storage-provisioner" [4743de2f-37ed-4b92-ac4e-4bcbff5897b1] Running
	I1105 19:17:01.208848   73496 system_pods.go:126] duration metric: took 6.584071ms to wait for k8s-apps to be running ...
	I1105 19:17:01.208856   73496 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 19:17:01.208898   73496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:01.225005   73496 system_svc.go:56] duration metric: took 16.138051ms WaitForService to wait for kubelet
	I1105 19:17:01.225038   73496 kubeadm.go:582] duration metric: took 5.328067688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 19:17:01.225062   73496 node_conditions.go:102] verifying NodePressure condition ...
	I1105 19:17:01.347771   73496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 19:17:01.347799   73496 node_conditions.go:123] node cpu capacity is 2
	I1105 19:17:01.347813   73496 node_conditions.go:105] duration metric: took 122.746343ms to run NodePressure ...
	I1105 19:17:01.347826   73496 start.go:241] waiting for startup goroutines ...
	I1105 19:17:01.347834   73496 start.go:246] waiting for cluster config update ...
	I1105 19:17:01.347846   73496 start.go:255] writing updated cluster config ...
	I1105 19:17:01.348126   73496 ssh_runner.go:195] Run: rm -f paused
	I1105 19:17:01.396396   73496 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 19:17:01.398528   73496 out.go:177] * Done! kubectl is now configured to use "no-preload-459223" cluster and "default" namespace by default
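As in the embed-certs run, the healthz wait here is a plain HTTPS poll of the endpoint named in the api_server.go:253 line until it answers 200 with the body "ok". A manual approximation follows; the -k flag skips certificate verification and is used purely for illustration, since the log does not show how the harness validates the certificate:

	# poll the apiserver health endpoint until it reports ok
	until curl -sk https://192.168.72.101:8443/healthz | grep -qx ok; do
	  sleep 1
	done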
	I1105 19:17:36.069129   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:17:36.069396   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:17:36.069426   74485 kubeadm.go:310] 
	I1105 19:17:36.069489   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:17:36.069572   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:17:36.069591   74485 kubeadm.go:310] 
	I1105 19:17:36.069638   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:17:36.069699   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:17:36.069843   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:17:36.069852   74485 kubeadm.go:310] 
	I1105 19:17:36.069967   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:17:36.070017   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:17:36.070067   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:17:36.070074   74485 kubeadm.go:310] 
	I1105 19:17:36.070216   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:17:36.070328   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:17:36.070345   74485 kubeadm.go:310] 
	I1105 19:17:36.070486   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:17:36.070622   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:17:36.070690   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:17:36.070758   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:17:36.070767   74485 kubeadm.go:310] 
	I1105 19:17:36.071471   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:17:36.071558   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:17:36.071652   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1105 19:17:36.071791   74485 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1105 19:17:36.071838   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1105 19:17:36.527864   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 19:17:36.543211   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 19:17:36.552656   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 19:17:36.552676   74485 kubeadm.go:157] found existing configuration files:
	
	I1105 19:17:36.552734   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 19:17:36.562296   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 19:17:36.562360   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 19:17:36.571759   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 19:17:36.580534   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 19:17:36.580586   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 19:17:36.590320   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.599165   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 19:17:36.599235   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 19:17:36.608340   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 19:17:36.616935   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 19:17:36.616986   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 19:17:36.625948   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 19:17:36.843267   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 19:19:32.770686   74485 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1105 19:19:32.770828   74485 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1105 19:19:32.772504   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1105 19:19:32.772564   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 19:19:32.772656   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 19:19:32.772784   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 19:19:32.772893   74485 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1105 19:19:32.772971   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 19:19:32.774648   74485 out.go:235]   - Generating certificates and keys ...
	I1105 19:19:32.774726   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 19:19:32.774804   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 19:19:32.774902   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1105 19:19:32.775012   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1105 19:19:32.775144   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1105 19:19:32.775223   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1105 19:19:32.775307   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1105 19:19:32.775397   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1105 19:19:32.775487   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1105 19:19:32.775597   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1105 19:19:32.775651   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I1105 19:19:32.775728   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 19:19:32.775796   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 19:19:32.775864   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 19:19:32.775961   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 19:19:32.776041   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 19:19:32.776175   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 19:19:32.776281   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 19:19:32.776330   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 19:19:32.776417   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 19:19:32.777837   74485 out.go:235]   - Booting up control plane ...
	I1105 19:19:32.777940   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 19:19:32.778032   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 19:19:32.778134   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 19:19:32.778248   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 19:19:32.778489   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1105 19:19:32.778563   74485 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1105 19:19:32.778652   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.778960   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779080   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779302   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779399   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779663   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.779766   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.779990   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780051   74485 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1105 19:19:32.780241   74485 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1105 19:19:32.780260   74485 kubeadm.go:310] 
	I1105 19:19:32.780325   74485 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1105 19:19:32.780381   74485 kubeadm.go:310] 		timed out waiting for the condition
	I1105 19:19:32.780391   74485 kubeadm.go:310] 
	I1105 19:19:32.780438   74485 kubeadm.go:310] 	This error is likely caused by:
	I1105 19:19:32.780486   74485 kubeadm.go:310] 		- The kubelet is not running
	I1105 19:19:32.780627   74485 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1105 19:19:32.780639   74485 kubeadm.go:310] 
	I1105 19:19:32.780748   74485 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1105 19:19:32.780790   74485 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1105 19:19:32.780819   74485 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1105 19:19:32.780825   74485 kubeadm.go:310] 
	I1105 19:19:32.780961   74485 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1105 19:19:32.781048   74485 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1105 19:19:32.781055   74485 kubeadm.go:310] 
	I1105 19:19:32.781144   74485 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1105 19:19:32.781225   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1105 19:19:32.781293   74485 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1105 19:19:32.781394   74485 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1105 19:19:32.781475   74485 kubeadm.go:394] duration metric: took 8m1.792270232s to StartCluster
	I1105 19:19:32.781485   74485 kubeadm.go:310] 
	I1105 19:19:32.781522   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 19:19:32.781589   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 19:19:32.825435   74485 cri.go:89] found id: ""
	I1105 19:19:32.825465   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.825475   74485 logs.go:284] No container was found matching "kube-apiserver"
	I1105 19:19:32.825482   74485 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 19:19:32.825543   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 19:19:32.859245   74485 cri.go:89] found id: ""
	I1105 19:19:32.859275   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.859286   74485 logs.go:284] No container was found matching "etcd"
	I1105 19:19:32.859293   74485 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 19:19:32.859355   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 19:19:32.890801   74485 cri.go:89] found id: ""
	I1105 19:19:32.890833   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.890844   74485 logs.go:284] No container was found matching "coredns"
	I1105 19:19:32.890851   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 19:19:32.890919   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 19:19:32.925244   74485 cri.go:89] found id: ""
	I1105 19:19:32.925273   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.925280   74485 logs.go:284] No container was found matching "kube-scheduler"
	I1105 19:19:32.925287   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 19:19:32.925352   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 19:19:32.959091   74485 cri.go:89] found id: ""
	I1105 19:19:32.959118   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.959129   74485 logs.go:284] No container was found matching "kube-proxy"
	I1105 19:19:32.959137   74485 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 19:19:32.959191   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 19:19:32.990230   74485 cri.go:89] found id: ""
	I1105 19:19:32.990264   74485 logs.go:282] 0 containers: []
	W1105 19:19:32.990276   74485 logs.go:284] No container was found matching "kube-controller-manager"
	I1105 19:19:32.990284   74485 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 19:19:32.990343   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 19:19:33.027461   74485 cri.go:89] found id: ""
	I1105 19:19:33.027494   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.027505   74485 logs.go:284] No container was found matching "kindnet"
	I1105 19:19:33.027512   74485 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1105 19:19:33.027574   74485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1105 19:19:33.070819   74485 cri.go:89] found id: ""
	I1105 19:19:33.070847   74485 logs.go:282] 0 containers: []
	W1105 19:19:33.070858   74485 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1105 19:19:33.070869   74485 logs.go:123] Gathering logs for kubelet ...
	I1105 19:19:33.070883   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 19:19:33.122580   74485 logs.go:123] Gathering logs for dmesg ...
	I1105 19:19:33.122615   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 19:19:33.136015   74485 logs.go:123] Gathering logs for describe nodes ...
	I1105 19:19:33.136043   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1105 19:19:33.213727   74485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1105 19:19:33.213750   74485 logs.go:123] Gathering logs for CRI-O ...
	I1105 19:19:33.213762   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 19:19:33.324287   74485 logs.go:123] Gathering logs for container status ...
	I1105 19:19:33.324333   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1105 19:19:33.384732   74485 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1105 19:19:33.384785   74485 out.go:270] * 
	W1105 19:19:33.384844   74485 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.384857   74485 out.go:270] * 
	W1105 19:19:33.385632   74485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 19:19:33.388860   74485 out.go:201] 
	W1105 19:19:33.390328   74485 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1105 19:19:33.390366   74485 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1105 19:19:33.390393   74485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1105 19:19:33.391785   74485 out.go:201] 
	
	
	==> CRI-O <==
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.508189245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835036508167151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3c9d822-f7dd-4996-b0e6-db41f51e0c0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.508749906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d53194f-e4a9-41ee-b7ba-f6a59ed6ab5b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.508795388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d53194f-e4a9-41ee-b7ba-f6a59ed6ab5b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.508826577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2d53194f-e4a9-41ee-b7ba-f6a59ed6ab5b name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.539903634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cf118e1-6aa5-47b8-aac2-5e76b1c80a34 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.539985335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cf118e1-6aa5-47b8-aac2-5e76b1c80a34 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.541438889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60d7780f-2836-40f9-969c-d1d9533b9694 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.541892255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835036541840094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60d7780f-2836-40f9-969c-d1d9533b9694 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.542570148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2569ed15-d678-4f0b-8ee9-3ba74a58d105 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.542659281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2569ed15-d678-4f0b-8ee9-3ba74a58d105 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.542709761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2569ed15-d678-4f0b-8ee9-3ba74a58d105 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.574554532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82b83fac-5aad-4e1e-86c9-2d8059d03533 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.574638895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82b83fac-5aad-4e1e-86c9-2d8059d03533 name=/runtime.v1.RuntimeService/Version
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.575806742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6520905-5f95-42e1-90ca-68b001c1432c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.576251475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835036576228243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6520905-5f95-42e1-90ca-68b001c1432c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.576808675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=491c396b-29ee-40a2-a42e-569b360906e0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.576852635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=491c396b-29ee-40a2-a42e-569b360906e0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.576892664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=491c396b-29ee-40a2-a42e-569b360906e0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.606937074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88f73199-02b3-4d7b-923f-0364300ef17c name=/runtime.v1.RuntimeService/Version
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.607029477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88f73199-02b3-4d7b-923f-0364300ef17c name=/runtime.v1.RuntimeService/Version
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.608275120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfb7cb09-53a7-4648-b927-c8f2ca996bc0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.608653918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730835036608628596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfb7cb09-53a7-4648-b927-c8f2ca996bc0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.609305062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78a12104-ba77-4bae-8bc7-94dd06aec8ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.609364552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78a12104-ba77-4bae-8bc7-94dd06aec8ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 05 19:30:36 old-k8s-version-567666 crio[622]: time="2024-11-05 19:30:36.609400631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=78a12104-ba77-4bae-8bc7-94dd06aec8ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 5 19:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055631] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039673] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.010642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.961684] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543338] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.991220] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.059812] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.048972] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.214500] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.145320] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.257311] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +6.641170] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[  +0.060122] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.800603] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[ +13.119531] kauditd_printk_skb: 46 callbacks suppressed
	[Nov 5 19:15] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Nov 5 19:17] systemd-fstab-generator[5393]: Ignoring "noauto" option for root device
	[  +0.071837] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:30:36 up 19 min,  0 users,  load average: 0.08, 0.08, 0.02
	Linux old-k8s-version-567666 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000d9180, 0xc000b7d548, 0x70c7020, 0x0, 0x0)
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc00088fc00)
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1245 +0x7e
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]: goroutine 166 [select]:
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000e4e410, 0xc000d7ec01, 0xc000d46b80, 0xc000d68f30, 0xc000c1da40, 0xc000c1da00)
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000d7ecc0, 0x0, 0x0)
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00088fc00)
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Nov 05 19:30:33 old-k8s-version-567666 kubelet[6838]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Nov 05 19:30:33 old-k8s-version-567666 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 05 19:30:33 old-k8s-version-567666 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 05 19:30:34 old-k8s-version-567666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 135.
	Nov 05 19:30:34 old-k8s-version-567666 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 05 19:30:34 old-k8s-version-567666 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 05 19:30:34 old-k8s-version-567666 kubelet[6847]: I1105 19:30:34.618747    6847 server.go:416] Version: v1.20.0
	Nov 05 19:30:34 old-k8s-version-567666 kubelet[6847]: I1105 19:30:34.619157    6847 server.go:837] Client rotation is on, will bootstrap in background
	Nov 05 19:30:34 old-k8s-version-567666 kubelet[6847]: I1105 19:30:34.621302    6847 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 05 19:30:34 old-k8s-version-567666 kubelet[6847]: W1105 19:30:34.622100    6847 manager.go:159] Cannot detect current cgroup on cgroup v2
	Nov 05 19:30:34 old-k8s-version-567666 kubelet[6847]: I1105 19:30:34.622307    6847 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 2 (226.356997ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-567666" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (117.96s)
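Note: every retry in the log above fails the same way: kubeadm's wait-control-plane phase times out because the kubelet health endpoint on localhost:10248 never responds, `crictl ps -a` returns an empty container list, and the kubelet unit keeps crash-looping (restart counter at 135). The following is a minimal triage sketch built only from commands the log itself recommends; the `minikube ssh -p old-k8s-version-567666` wrapper and the final restart invocation are illustrative assumptions, and `--extra-config=kubelet.cgroup-driver=systemd` is minikube's own suggestion from the log, not a verified fix.

	# Inspect the kubelet service and its recent logs on the node
	minikube ssh -p old-k8s-version-567666 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-567666 "sudo journalctl -xeu kubelet | tail -n 100"

	# List any Kubernetes containers CRI-O actually started (expected to be empty here)
	minikube ssh -p old-k8s-version-567666 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup-driver override suggested at the end of the log
	minikube start -p old-k8s-version-567666 --extra-config=kubelet.cgroup-driver=systemd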


Test pass (243/314)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 30.17
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 16.53
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.14
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 101.39
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 131.71
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.49
35 TestAddons/parallel/Registry 17.07
37 TestAddons/parallel/InspektorGadget 10.84
40 TestAddons/parallel/CSI 58.18
41 TestAddons/parallel/Headlamp 17.89
42 TestAddons/parallel/CloudSpanner 6.65
43 TestAddons/parallel/LocalPath 57.03
44 TestAddons/parallel/NvidiaDevicePlugin 6.62
45 TestAddons/parallel/Yakd 11.91
48 TestCertOptions 56.46
49 TestCertExpiration 285.64
51 TestForceSystemdFlag 53.16
52 TestForceSystemdEnv 66.41
54 TestKVMDriverInstallOrUpdate 4.32
58 TestErrorSpam/setup 40.33
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.74
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.66
63 TestErrorSpam/stop 4.92
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 80.38
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 55.95
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
75 TestFunctional/serial/CacheCmd/cache/add_local 2.06
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 367.02
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.31
86 TestFunctional/serial/LogsFileCmd 1.33
87 TestFunctional/serial/InvalidService 4.52
89 TestFunctional/parallel/ConfigCmd 0.35
90 TestFunctional/parallel/DashboardCmd 29.85
91 TestFunctional/parallel/DryRun 0.27
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.95
97 TestFunctional/parallel/ServiceCmdConnect 8.49
98 TestFunctional/parallel/AddonsCmd 0.12
99 TestFunctional/parallel/PersistentVolumeClaim 46.09
101 TestFunctional/parallel/SSHCmd 0.38
102 TestFunctional/parallel/CpCmd 1.39
103 TestFunctional/parallel/MySQL 25.06
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.5
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
113 TestFunctional/parallel/License 0.57
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
116 TestFunctional/parallel/ProfileCmd/profile_list 0.35
117 TestFunctional/parallel/MountCmd/any-port 9.4
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
119 TestFunctional/parallel/Version/short 0.05
120 TestFunctional/parallel/Version/components 0.61
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.63
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.67
125 TestFunctional/parallel/ImageCommands/ImageBuild 9.7
126 TestFunctional/parallel/ImageCommands/Setup 1.73
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.88
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.98
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
134 TestFunctional/parallel/MountCmd/specific-port 1.88
144 TestFunctional/parallel/ServiceCmd/List 0.33
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.26
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
147 TestFunctional/parallel/ServiceCmd/Format 0.34
148 TestFunctional/parallel/MountCmd/VerifyCleanup 0.85
149 TestFunctional/parallel/ServiceCmd/URL 0.37
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 194.38
160 TestMultiControlPlane/serial/DeployApp 6.31
161 TestMultiControlPlane/serial/PingHostFromPods 1.13
162 TestMultiControlPlane/serial/AddWorkerNode 57.61
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
165 TestMultiControlPlane/serial/CopyFile 12.6
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.67
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
174 TestMultiControlPlane/serial/RestartCluster 351.73
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 76.72
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
181 TestJSONOutput/start/Command 51.22
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.65
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.6
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.6
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 83.82
213 TestMountStart/serial/StartWithMountFirst 24.64
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 27.05
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.7
218 TestMountStart/serial/VerifyMountPostDelete 0.38
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 22.96
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 111.07
225 TestMultiNode/serial/DeployApp2Nodes 5.59
226 TestMultiNode/serial/PingHostFrom2Pods 0.75
227 TestMultiNode/serial/AddNode 47.35
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.19
231 TestMultiNode/serial/StopNode 2.19
232 TestMultiNode/serial/StartAfterStop 37.88
234 TestMultiNode/serial/DeleteNode 1.99
236 TestMultiNode/serial/RestartMultiNode 177.37
237 TestMultiNode/serial/ValidateNameConflict 41.08
244 TestScheduledStopUnix 112.22
248 TestRunningBinaryUpgrade 210.93
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 109.8
262 TestNetworkPlugins/group/false 4.27
266 TestNoKubernetes/serial/StartWithStopK8s 39.93
267 TestNoKubernetes/serial/Start 28.4
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
269 TestNoKubernetes/serial/ProfileList 0.96
270 TestNoKubernetes/serial/Stop 2.41
271 TestNoKubernetes/serial/StartNoArgs 59.78
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
273 TestStoppedBinaryUpgrade/Setup 2.32
274 TestStoppedBinaryUpgrade/Upgrade 110.66
283 TestPause/serial/Start 80.8
284 TestNetworkPlugins/group/auto/Start 83.73
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
286 TestNetworkPlugins/group/kindnet/Start 80.92
288 TestNetworkPlugins/group/auto/KubeletFlags 0.22
289 TestNetworkPlugins/group/auto/NetCatPod 10.22
290 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
291 TestNetworkPlugins/group/auto/DNS 0.14
292 TestNetworkPlugins/group/auto/Localhost 0.11
293 TestNetworkPlugins/group/auto/HairPin 0.12
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
295 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
296 TestNetworkPlugins/group/kindnet/DNS 0.18
297 TestNetworkPlugins/group/kindnet/Localhost 0.14
298 TestNetworkPlugins/group/kindnet/HairPin 0.15
299 TestNetworkPlugins/group/calico/Start 89
300 TestNetworkPlugins/group/custom-flannel/Start 103.45
301 TestNetworkPlugins/group/enable-default-cni/Start 137.78
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/calico/KubeletFlags 0.47
304 TestNetworkPlugins/group/calico/NetCatPod 10.91
305 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
306 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
307 TestNetworkPlugins/group/calico/DNS 0.24
308 TestNetworkPlugins/group/calico/Localhost 0.17
309 TestNetworkPlugins/group/calico/HairPin 0.15
310 TestNetworkPlugins/group/custom-flannel/DNS 0.2
311 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
312 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
313 TestNetworkPlugins/group/flannel/Start 72.29
314 TestNetworkPlugins/group/bridge/Start 114.77
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
323 TestStartStop/group/no-preload/serial/FirstStart 103.55
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
326 TestNetworkPlugins/group/flannel/NetCatPod 12.2
327 TestNetworkPlugins/group/flannel/DNS 0.2
328 TestNetworkPlugins/group/flannel/Localhost 0.15
329 TestNetworkPlugins/group/flannel/HairPin 0.15
331 TestStartStop/group/embed-certs/serial/FirstStart 59.31
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
333 TestNetworkPlugins/group/bridge/NetCatPod 12.27
334 TestNetworkPlugins/group/bridge/DNS 0.19
335 TestNetworkPlugins/group/bridge/Localhost 0.15
336 TestNetworkPlugins/group/bridge/HairPin 0.13
338 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.36
339 TestStartStop/group/no-preload/serial/DeployApp 11.29
340 TestStartStop/group/embed-certs/serial/DeployApp 9.29
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
343 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
350 TestStartStop/group/no-preload/serial/SecondStart 676.02
353 TestStartStop/group/embed-certs/serial/SecondStart 611.6
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 515.23
356 TestStartStop/group/old-k8s-version/serial/Stop 6.29
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
368 TestStartStop/group/newest-cni/serial/FirstStart 47.23
369 TestStartStop/group/newest-cni/serial/DeployApp 0
370 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
371 TestStartStop/group/newest-cni/serial/Stop 10.42
372 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/newest-cni/serial/SecondStart 36.12
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
377 TestStartStop/group/newest-cni/serial/Pause 4.25
TestDownloadOnly/v1.20.0/json-events (30.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-753477 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-753477 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (30.17314043s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (30.17s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1105 17:41:36.537237   15492 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1105 17:41:36.537310   15492 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
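The preload-exists check above only confirms that the cached tarball reported by preload.go is present on disk. A minimal stand-alone sketch of the same idea, assuming the default cache layout under MINIKUBE_HOME (falling back to ~/.minikube) and using the file name from the log:

// preloadcheck.go - illustrative sketch of the preload-exists idea; not the
// actual aaa_download_only_test.go code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// MINIKUBE_HOME fallback to ~/.minikube is an assumption; this run used
	// /home/jenkins/minikube-integration/19910-8296/.minikube.
	home := os.Getenv("MINIKUBE_HOME")
	if home == "" {
		home = filepath.Join(os.Getenv("HOME"), ".minikube")
	}
	// Tarball name as reported by preload.go in the log above.
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
	info, err := os.Stat(tarball)
	if err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Printf("preload exists: %s (%d bytes)\n", tarball, info.Size())
}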

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-753477
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-753477: exit status 85 (60.486031ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-753477 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |          |
	|         | -p download-only-753477        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:41:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:41:06.404352   15504 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:41:06.404473   15504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:06.404484   15504 out.go:358] Setting ErrFile to fd 2...
	I1105 17:41:06.404491   15504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:06.404689   15504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	W1105 17:41:06.404845   15504 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19910-8296/.minikube/config/config.json: open /home/jenkins/minikube-integration/19910-8296/.minikube/config/config.json: no such file or directory
	I1105 17:41:06.405431   15504 out.go:352] Setting JSON to true
	I1105 17:41:06.406295   15504 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1408,"bootTime":1730827058,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 17:41:06.406390   15504 start.go:139] virtualization: kvm guest
	I1105 17:41:06.408736   15504 out.go:97] [download-only-753477] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 17:41:06.408871   15504 notify.go:220] Checking for updates...
	W1105 17:41:06.408882   15504 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball: no such file or directory
	I1105 17:41:06.410157   15504 out.go:169] MINIKUBE_LOCATION=19910
	I1105 17:41:06.411436   15504 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:41:06.412665   15504 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 17:41:06.413810   15504 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 17:41:06.414949   15504 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1105 17:41:06.417507   15504 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1105 17:41:06.417795   15504 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:41:06.517506   15504 out.go:97] Using the kvm2 driver based on user configuration
	I1105 17:41:06.517533   15504 start.go:297] selected driver: kvm2
	I1105 17:41:06.517542   15504 start.go:901] validating driver "kvm2" against <nil>
	I1105 17:41:06.517893   15504 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:06.518018   15504 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 17:41:06.532745   15504 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 17:41:06.532820   15504 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:41:06.533546   15504 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1105 17:41:06.533755   15504 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 17:41:06.533788   15504 cni.go:84] Creating CNI manager for ""
	I1105 17:41:06.533850   15504 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 17:41:06.533862   15504 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 17:41:06.533928   15504 start.go:340] cluster config:
	{Name:download-only-753477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-753477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:41:06.534161   15504 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:06.536084   15504 out.go:97] Downloading VM boot image ...
	I1105 17:41:06.536122   15504 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 17:41:16.035271   15504 out.go:97] Starting "download-only-753477" primary control-plane node in "download-only-753477" cluster
	I1105 17:41:16.035288   15504 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 17:41:16.137810   15504 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 17:41:16.137842   15504 cache.go:56] Caching tarball of preloaded images
	I1105 17:41:16.137985   15504 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 17:41:16.139797   15504 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1105 17:41:16.139817   15504 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1105 17:41:16.241425   15504 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 17:41:34.838677   15504 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1105 17:41:34.838766   15504 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1105 17:41:35.741809   15504 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1105 17:41:35.742151   15504 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/download-only-753477/config.json ...
	I1105 17:41:35.742181   15504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/download-only-753477/config.json: {Name:mkf6b30273fdecda7df50706e4b97d445489068c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:35.742328   15504 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 17:41:35.742496   15504 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-753477 host does not exist
	  To start a cluster, run: "minikube start -p download-only-753477"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
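Exit status 85 is the expected outcome here: the download-only profile never creates a host, so `minikube logs` has nothing to read. A hedged sketch of asserting that exit code outside the test framework, with the binary path and profile name taken from the log above:

// logsexit.go - illustrative only; the real check lives in aaa_download_only_test.go.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name copied from the test invocation above.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-753477")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected success; expected exit status 85")
	case errors.As(err, &ee) && ee.ExitCode() == 85:
		fmt.Println("got the expected exit status 85 (profile host does not exist)")
	default:
		fmt.Printf("unexpected error: %v\n%s", err, out)
	}
}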

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-753477
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (16.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-083264 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-083264 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.525843823s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (16.53s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1105 17:41:53.379263   15492 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1105 17:41:53.379305   15492 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-083264
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-083264: exit status 85 (61.759759ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-753477 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | -p download-only-753477        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| delete  | -p download-only-753477        | download-only-753477 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| start   | -o=json --download-only        | download-only-083264 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | -p download-only-083264        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:41:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:41:36.892823   15777 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:41:36.892917   15777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:36.892925   15777 out.go:358] Setting ErrFile to fd 2...
	I1105 17:41:36.892930   15777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:36.893129   15777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 17:41:36.893645   15777 out.go:352] Setting JSON to true
	I1105 17:41:36.894452   15777 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1439,"bootTime":1730827058,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 17:41:36.894545   15777 start.go:139] virtualization: kvm guest
	I1105 17:41:36.896412   15777 out.go:97] [download-only-083264] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 17:41:36.896580   15777 notify.go:220] Checking for updates...
	I1105 17:41:36.897753   15777 out.go:169] MINIKUBE_LOCATION=19910
	I1105 17:41:36.898947   15777 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:41:36.899966   15777 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 17:41:36.901211   15777 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 17:41:36.902388   15777 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1105 17:41:36.904972   15777 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1105 17:41:36.905167   15777 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:41:36.936339   15777 out.go:97] Using the kvm2 driver based on user configuration
	I1105 17:41:36.936359   15777 start.go:297] selected driver: kvm2
	I1105 17:41:36.936364   15777 start.go:901] validating driver "kvm2" against <nil>
	I1105 17:41:36.936675   15777 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:36.936745   15777 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19910-8296/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1105 17:41:36.951343   15777 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1105 17:41:36.951386   15777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:41:36.951897   15777 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1105 17:41:36.952040   15777 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 17:41:36.952065   15777 cni.go:84] Creating CNI manager for ""
	I1105 17:41:36.952106   15777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1105 17:41:36.952114   15777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 17:41:36.952159   15777 start.go:340] cluster config:
	{Name:download-only-083264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-083264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:41:36.952250   15777 iso.go:125] acquiring lock: {Name:mk8779307fc708ec02c37c53d2da6abbb9fc57e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:36.953852   15777 out.go:97] Starting "download-only-083264" primary control-plane node in "download-only-083264" cluster
	I1105 17:41:36.953870   15777 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:41:37.466848   15777 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 17:41:37.466880   15777 cache.go:56] Caching tarball of preloaded images
	I1105 17:41:37.467052   15777 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:41:37.469015   15777 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1105 17:41:37.469033   15777 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1105 17:41:37.569123   15777 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 17:41:51.765545   15777 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1105 17:41:51.765644   15777 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19910-8296/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-083264 host does not exist
	  To start a cluster, run: "minikube start -p download-only-083264"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-083264
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1105 17:41:53.949575   15492 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-133090 --alsologtostderr --binary-mirror http://127.0.0.1:38161 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-133090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-133090
--- PASS: TestBinaryMirror (0.59s)
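For context, --binary-mirror points the start at a local HTTP endpoint (127.0.0.1:38161 in this run) to be used in place of dl.k8s.io when fetching Kubernetes binaries; the test brings up its own server. A rough stand-in with an assumed directory layout (not the test's actual implementation) is just a static file server:

// mirror.go - assumed sketch of a local binary mirror; the integration test
// starts its own equivalent on a random port (38161 in the log above).
package main

import (
	"log"
	"net/http"
)

func main() {
	// Assumed layout: ./mirror mimics dl.k8s.io, e.g.
	// ./mirror/release/v1.31.2/bin/linux/amd64/kubectl
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("binary mirror listening on 127.0.0.1:38161")
	log.Fatal(http.ListenAndServe("127.0.0.1:38161", nil))
}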

                                                
                                    
TestOffline (101.39s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-019255 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-019255 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.394853989s)
helpers_test.go:175: Cleaning up "offline-crio-019255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-019255
--- PASS: TestOffline (101.39s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-320753
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-320753: exit status 85 (54.197495ms)

                                                
                                                
-- stdout --
	* Profile "addons-320753" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-320753"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-320753
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-320753: exit status 85 (51.952822ms)

                                                
                                                
-- stdout --
	* Profile "addons-320753" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-320753"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (131.71s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-320753 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-320753 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m11.706340377s)
--- PASS: TestAddons/Setup (131.71s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-320753 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-320753 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-320753 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-320753 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4c68727b-d745-4759-85fb-537736d0c04a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4c68727b-d745-4759-85fb-537736d0c04a] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004369465s
addons_test.go:633: (dbg) Run:  kubectl --context addons-320753 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-320753 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-320753 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

                                                
                                    
TestAddons/parallel/Registry (17.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.098729ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-xtz7j" [549ed7b1-2983-4fca-8715-25afc280c616] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002752043s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k2wqh" [b9f4e07d-8955-4605-8ecd-360952c67ad2] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003606584s
addons_test.go:331: (dbg) Run:  kubectl --context addons-320753 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-320753 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-320753 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.332783129s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 ip
2024/11/05 17:44:54 [DEBUG] GET http://192.168.39.201:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.07s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rzhh7" [8a3bdbdf-000d-4912-8fed-caf84253193c] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00470976s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 addons disable inspektor-gadget --alsologtostderr -v=1: (5.835167773s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
TestAddons/parallel/CSI (58.18s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1105 17:44:56.196729   15492 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1105 17:44:56.201362   15492 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1105 17:44:56.201386   15492 kapi.go:107] duration metric: took 4.681943ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.690355ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-320753 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-320753 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [31f71a04-2c3e-4332-a49e-0d0a10061695] Pending
helpers_test.go:344: "task-pv-pod" [31f71a04-2c3e-4332-a49e-0d0a10061695] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [31f71a04-2c3e-4332-a49e-0d0a10061695] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003988162s
addons_test.go:511: (dbg) Run:  kubectl --context addons-320753 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-320753 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-320753 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-320753 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-320753 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-320753 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-320753 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [69731bf9-840a-4d23-aa3c-f8dca02e4628] Pending
helpers_test.go:344: "task-pv-pod-restore" [69731bf9-840a-4d23-aa3c-f8dca02e4628] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [69731bf9-840a-4d23-aa3c-f8dca02e4628] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00381775s
addons_test.go:553: (dbg) Run:  kubectl --context addons-320753 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-320753 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-320753 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.04078329s)
--- PASS: TestAddons/parallel/CSI (58.18s)
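For orientation, the CSI flow above is: bind a hostpath-backed PVC (hpvc), mount it in task-pv-pod, snapshot it, delete the pod and claim, then restore the snapshot into hpvc-restore and mount that in task-pv-pod-restore. A minimal sketch of the snapshot and restore objects, assuming the snapshot.storage.k8s.io/v1 API and the csi-hostpath example class names csi-hostpath-snapclass / csi-hostpath-sc; the actual testdata manifests may differ in detail:

kubectl --context addons-320753 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl --context addons-320753 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc    # assumed class name
  dataSource:                          # restore source: the snapshot above
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF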

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-320753 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-qtllw" [33783d9c-0ba5-44d2-82c5-436cc5e0c239] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-qtllw" [33783d9c-0ba5-44d2-82c5-436cc5e0c239] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-qtllw" [33783d9c-0ba5-44d2-82c5-436cc5e0c239] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005666715s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 addons disable headlamp --alsologtostderr -v=1: (6.008013384s)
--- PASS: TestAddons/parallel/Headlamp (17.89s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-6gmwl" [3b17c3f9-1e9f-4858-8a66-1542b6d3bca5] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003811725s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.03s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-320753 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-320753 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320753 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bbe5b0de-bf49-4058-a6d5-7a11224da8a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bbe5b0de-bf49-4058-a6d5-7a11224da8a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bbe5b0de-bf49-4058-a6d5-7a11224da8a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004483437s
addons_test.go:906: (dbg) Run:  kubectl --context addons-320753 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 ssh "cat /opt/local-path-provisioner/pvc-dc83c679-ddcc-4681-bf85-ba96348fe5e0_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-320753 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-320753 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.216422206s)
--- PASS: TestAddons/parallel/LocalPath (57.03s)
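The LocalPath flow above exercises the Rancher local-path provisioner: the claim stays Pending through the repeated phase polls, is provisioned once the consuming pod is scheduled, and the data lands under /opt/local-path-provisioner on the node (the ssh "cat .../file1" step reads it back). A minimal equivalent claim, assuming the addon's StorageClass is named local-path and uses the provisioner's usual WaitForFirstConsumer binding; the real testdata manifest may differ:

kubectl --context addons-320753 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed class name provided by the addon
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF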

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rgxmq" [20281175-a7ec-44e4-a0f9-e0dd96dfe10c] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004007527s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bsr4h" [ec886261-fadb-4e9c-b575-b55d3a800ff9] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00320496s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-320753 addons disable yakd --alsologtostderr -v=1: (5.904390631s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

                                                
                                    
x
+
TestCertOptions (56.46s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-358420 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1105 18:52:31.419279   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-358420 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (55.17814138s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-358420 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-358420 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-358420 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-358420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-358420
--- PASS: TestCertOptions (56.46s)
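The apiserver options passed at cert_options_test.go:49 (--apiserver-ips, --apiserver-names, --apiserver-port=8555) are verified by reading the generated certificate. A quick manual check along the same lines, reusing the exact command from the log:

out/minikube-linux-amd64 -p cert-options-358420 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# The SANs should list 127.0.0.1, 192.168.15.15, localhost and www.google.com;
# `kubectl --context cert-options-358420 config view` should show the server on port 8555.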

                                                
                                    
x
+
TestCertExpiration (285.64s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-099467 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-099467 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (40.643735687s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-099467 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-099467 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m3.874904123s)
helpers_test.go:175: Cleaning up "cert-expiration-099467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-099467
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-099467: (1.118928276s)
--- PASS: TestCertExpiration (285.64s)
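TestCertExpiration first starts the profile with --cert-expiration=3m and later restarts it with --cert-expiration=8760h; the roughly three-minute gap between the two starts matches the short expiration, which suggests the test lets the certificates lapse before verifying they are regenerated. One way to eyeball the current expiry, reusing the certificate path seen elsewhere in this report:

out/minikube-linux-amd64 -p cert-expiration-099467 ssh \
  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# Prints "notAfter=..."; after the 8760h restart this should sit roughly a year out.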

                                                
                                    
x
+
TestForceSystemdFlag (53.16s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-354698 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-354698 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.186517199s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-354698 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-354698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-354698
--- PASS: TestForceSystemdFlag (53.16s)
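The flag test asserts that --force-systemd is reflected in CRI-O's drop-in config; the relevant CRI-O key is cgroup_manager. A narrower check on the same file read above:

out/minikube-linux-amd64 -p force-systemd-flag-354698 ssh \
  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# Expected with --force-systemd:
#   cgroup_manager = "systemd"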

                                                
                                    
x
+
TestForceSystemdEnv (66.41s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-082098 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-082098 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.640005893s)
helpers_test.go:175: Cleaning up "force-systemd-env-082098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-082098
--- PASS: TestForceSystemdEnv (66.41s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.32s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1105 18:52:02.435616   15492 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1105 18:52:02.435761   15492 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1105 18:52:02.464294   15492 install.go:62] docker-machine-driver-kvm2: exit status 1
W1105 18:52:02.464633   15492 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1105 18:52:02.464692   15492 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2631039632/001/docker-machine-driver-kvm2
I1105 18:52:02.743493   15492 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2631039632/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40] Decompressors:map[bz2:0xc0004b58f0 gz:0xc0004b58f8 tar:0xc0004b58a0 tar.bz2:0xc0004b58b0 tar.gz:0xc0004b58c0 tar.xz:0xc0004b58d0 tar.zst:0xc0004b58e0 tbz2:0xc0004b58b0 tgz:0xc0004b58c0 txz:0xc0004b58d0 tzst:0xc0004b58e0 xz:0xc0004b5900 zip:0xc0004b5910 zst:0xc0004b5908] Getters:map[file:0xc00193af10 http:0xc0008b95e0 https:0xc0008b9630] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1105 18:52:02.743568   15492 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2631039632/001/docker-machine-driver-kvm2
I1105 18:52:04.894800   15492 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1105 18:52:04.894905   15492 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1105 18:52:04.926507   15492 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1105 18:52:04.926538   15492 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1105 18:52:04.926595   15492 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1105 18:52:04.926620   15492 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2631039632/002/docker-machine-driver-kvm2
I1105 18:52:04.970529   15492 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2631039632/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40 0x530dd40] Decompressors:map[bz2:0xc0004b58f0 gz:0xc0004b58f8 tar:0xc0004b58a0 tar.bz2:0xc0004b58b0 tar.gz:0xc0004b58c0 tar.xz:0xc0004b58d0 tar.zst:0xc0004b58e0 tbz2:0xc0004b58b0 tgz:0xc0004b58c0 txz:0xc0004b58d0 tzst:0xc0004b58e0 xz:0xc0004b5900 zip:0xc0004b5910 zst:0xc0004b5908] Getters:map[file:0xc001e38200 http:0xc00076b220 https:0xc00076b310] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1105 18:52:04.970570   15492 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2631039632/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.32s)

                                                
                                    
x
+
TestErrorSpam/setup (40.33s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-985237 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-985237 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-985237 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-985237 --driver=kvm2  --container-runtime=crio: (40.326435059s)
--- PASS: TestErrorSpam/setup (40.33s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
x
+
TestErrorSpam/stop (4.92s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 stop: (1.601128988s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 stop: (1.500980371s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-985237 --log_dir /tmp/nospam-985237 stop: (1.818439864s)
--- PASS: TestErrorSpam/stop (4.92s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19910-8296/.minikube/files/etc/test/nested/copy/15492/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (80.38s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311365 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1105 17:54:06.921137   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:06.927522   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:06.938902   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:06.960273   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:07.001713   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:07.083260   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:07.244860   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:07.566562   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:08.208235   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:09.489798   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:12.052777   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:17.174909   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:27.416585   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:54:47.898692   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-311365 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.378660433s)
--- PASS: TestFunctional/serial/StartWithProxy (80.38s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (55.95s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1105 17:55:12.926820   15492 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311365 --alsologtostderr -v=8
E1105 17:55:28.860405   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-311365 --alsologtostderr -v=8: (55.951383807s)
functional_test.go:663: soft start took 55.952063986s for "functional-311365" cluster.
I1105 17:56:08.878571   15492 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (55.95s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-311365 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 cache add registry.k8s.io/pause:3.1: (1.187396033s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 cache add registry.k8s.io/pause:3.3: (1.232076772s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 cache add registry.k8s.io/pause:latest: (1.216985432s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-311365 /tmp/TestFunctionalserialCacheCmdcacheadd_local2777640702/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cache add minikube-local-cache-test:functional-311365
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 cache add minikube-local-cache-test:functional-311365: (1.747011127s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cache delete minikube-local-cache-test:functional-311365
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-311365
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.06s)
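The add_local test round-trips a locally built image through the minikube cache: build it with docker, add it to the cache, then drop both the cache entry and the local tag. The same sequence by hand (the build-context directory below is hypothetical; the test uses a temp dir):

docker build -t minikube-local-cache-test:functional-311365 ./local-cache-context   # hypothetical context dir
out/minikube-linux-amd64 -p functional-311365 cache add minikube-local-cache-test:functional-311365
out/minikube-linux-amd64 -p functional-311365 cache delete minikube-local-cache-test:functional-311365
docker rmi minikube-local-cache-test:functional-311365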

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.342185ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 cache reload: (1.010207318s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
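The reload test makes the cache round-trip explicit: remove the image from the node's runtime, confirm crictl no longer finds it, then let `cache reload` push the cached images back. Condensed from the commands above:

out/minikube-linux-amd64 -p functional-311365 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-311365 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
out/minikube-linux-amd64 -p functional-311365 cache reload
out/minikube-linux-amd64 -p functional-311365 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again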

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 kubectl -- --context functional-311365 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-311365 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (367.02s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311365 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1105 17:56:50.784976   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:59:06.920961   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 17:59:34.632219   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-311365 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m7.01610562s)
functional_test.go:761: restart took 6m7.016239169s for "functional-311365" cluster.
I1105 18:02:23.988400   15492 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (367.02s)
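The ExtraConfig restart passes an apiserver flag through --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision, which ends up on the kube-apiserver command line in its static pod. A hedged way to confirm it, assuming the usual kube-apiserver-<node> static pod name:

kubectl --context functional-311365 -n kube-system get pod kube-apiserver-functional-311365 -o yaml \
  | grep enable-admission-plugins
# Expected to include: --enable-admission-plugins=NamespaceAutoProvision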

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-311365 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 logs: (1.30925731s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 logs --file /tmp/TestFunctionalserialLogsFileCmd1516281997/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 logs --file /tmp/TestFunctionalserialLogsFileCmd1516281997/001/logs.txt: (1.323653207s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.52s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-311365 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-311365
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-311365: exit status 115 (280.294945ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.14:32209 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-311365 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-311365 delete -f testdata/invalidsvc.yaml: (1.045535257s)
--- PASS: TestFunctional/serial/InvalidService (4.52s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 config get cpus: exit status 14 (55.439625ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 config get cpus: exit status 14 (54.160207ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (29.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-311365 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-311365 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26459: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.85s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311365 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-311365 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.545985ms)

                                                
                                                
-- stdout --
	* [functional-311365] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:02:42.152601   25644 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:02:42.152737   25644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:02:42.152749   25644 out.go:358] Setting ErrFile to fd 2...
	I1105 18:02:42.152755   25644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:02:42.152953   25644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:02:42.153468   25644 out.go:352] Setting JSON to false
	I1105 18:02:42.154408   25644 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2704,"bootTime":1730827058,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:02:42.154538   25644 start.go:139] virtualization: kvm guest
	I1105 18:02:42.157011   25644 out.go:177] * [functional-311365] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:02:42.158437   25644 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:02:42.158449   25644 notify.go:220] Checking for updates...
	I1105 18:02:42.161516   25644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:02:42.162814   25644 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:02:42.164095   25644 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:02:42.165290   25644 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:02:42.166498   25644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:02:42.168107   25644 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:02:42.168509   25644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:02:42.168578   25644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:02:42.185055   25644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I1105 18:02:42.185507   25644 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:02:42.186148   25644 main.go:141] libmachine: Using API Version  1
	I1105 18:02:42.186173   25644 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:02:42.186501   25644 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:02:42.186693   25644 main.go:141] libmachine: (functional-311365) Calling .DriverName
	I1105 18:02:42.186943   25644 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:02:42.187264   25644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:02:42.187298   25644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:02:42.203486   25644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36585
	I1105 18:02:42.203999   25644 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:02:42.204506   25644 main.go:141] libmachine: Using API Version  1
	I1105 18:02:42.204530   25644 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:02:42.204989   25644 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:02:42.205161   25644 main.go:141] libmachine: (functional-311365) Calling .DriverName
	I1105 18:02:42.237284   25644 out.go:177] * Using the kvm2 driver based on existing profile
	I1105 18:02:42.238574   25644 start.go:297] selected driver: kvm2
	I1105 18:02:42.238592   25644 start.go:901] validating driver "kvm2" against &{Name:functional-311365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-311365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.14 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:02:42.238725   25644 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:02:42.241020   25644 out.go:201] 
	W1105 18:02:42.242182   25644 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1105 18:02:42.243222   25644 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311365 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
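The RSRC_INSUFFICIENT_REQ_MEMORY exit captured in the stderr above comes from requesting 250MB against the 1800MB usable floor; a sketch of reproducing it by hand with the flags recorded for this profile (the explicit exit-code check is an addition, not part of the test):

    # --dry-run validates the request against the existing functional-311365 profile without touching the VM
    out/minikube-linux-amd64 start -p functional-311365 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio
    echo $?   # non-zero; the InternationalLanguage run below shows exit status 23 for the same flags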

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-311365 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-311365 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.969066ms)

                                                
                                                
-- stdout --
	* [functional-311365] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:02:42.433787   25702 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:02:42.433908   25702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:02:42.433920   25702 out.go:358] Setting ErrFile to fd 2...
	I1105 18:02:42.433927   25702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:02:42.434322   25702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:02:42.435057   25702 out.go:352] Setting JSON to false
	I1105 18:02:42.436300   25702 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2704,"bootTime":1730827058,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:02:42.436437   25702 start.go:139] virtualization: kvm guest
	I1105 18:02:42.438570   25702 out.go:177] * [functional-311365] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1105 18:02:42.440861   25702 notify.go:220] Checking for updates...
	I1105 18:02:42.440873   25702 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:02:42.442268   25702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:02:42.443680   25702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:02:42.444902   25702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:02:42.446192   25702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:02:42.447440   25702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:02:42.449147   25702 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:02:42.449622   25702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:02:42.449711   25702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:02:42.468644   25702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I1105 18:02:42.469059   25702 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:02:42.469656   25702 main.go:141] libmachine: Using API Version  1
	I1105 18:02:42.469681   25702 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:02:42.470077   25702 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:02:42.470257   25702 main.go:141] libmachine: (functional-311365) Calling .DriverName
	I1105 18:02:42.470506   25702 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:02:42.470938   25702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:02:42.470990   25702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:02:42.486537   25702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I1105 18:02:42.487098   25702 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:02:42.488658   25702 main.go:141] libmachine: Using API Version  1
	I1105 18:02:42.488683   25702 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:02:42.488976   25702 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:02:42.489237   25702 main.go:141] libmachine: (functional-311365) Calling .DriverName
	I1105 18:02:42.528052   25702 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1105 18:02:42.529460   25702 start.go:297] selected driver: kvm2
	I1105 18:02:42.529478   25702 start.go:901] validating driver "kvm2" against &{Name:functional-311365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-311365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.14 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:02:42.529583   25702 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:02:42.531760   25702 out.go:201] 
	W1105 18:02:42.532944   25702 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1105 18:02:42.534140   25702 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
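The only difference from DryRun is the display language of the same insufficient-memory error. A sketch, assuming minikube picks its message catalog from the standard locale environment variables (how the test requests French is not shown in this log):

    # Same dry run as above, but with a French locale exported for the minikube process
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-311365 --dry-run \
      --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio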

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)
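The custom format at functional_test.go:860 is a Go template over the status struct; a sketch using the same fields (note the "kublet:" text in the logged command is only the test's label string, the template field itself is .Kubelet):

    out/minikube-linux-amd64 -p functional-311365 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-311365 status -o json   # same data, machine-readable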

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-311365 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-311365 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-wdxlh" [1372a964-9d71-4438-94f6-c196a58e39c5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-wdxlh" [1372a964-9d71-4438-94f6-c196a58e39c5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003905455s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.50.14:32617
functional_test.go:1675: http://192.168.50.14:32617: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-wdxlh

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.14:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.14:32617
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.49s)
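Outside the harness, the same NodePort round trip looks roughly like this (names and image taken from the commands above; the explicit wait step is an addition for sequencing):

    kubectl --context functional-311365 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-311365 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-311365 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-311365 service hello-node-connect --url)
    curl -s "$URL"   # echoserver answers with the pod hostname and request headers, as captured above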

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [01851d23-2762-4b66-90a1-eaf72d57a209] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004797068s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-311365 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-311365 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-311365 get pvc myclaim -o=json
I1105 18:02:38.536091   15492 retry.go:31] will retry after 2.223848401s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:9eb45eac-4128-4023-b8b0-dd9ff9c53b19 ResourceVersion:463 Generation:0 CreationTimestamp:2024-11-05 18:02:38 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a33180 VolumeMode:0xc001a33190 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-311365 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-311365 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [065c6608-bdf1-4c43-b54f-c0e7ccc6c7aa] Pending
helpers_test.go:344: "sp-pod" [065c6608-bdf1-4c43-b54f-c0e7ccc6c7aa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [065c6608-bdf1-4c43-b54f-c0e7ccc6c7aa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003758853s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-311365 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-311365 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-311365 delete -f testdata/storage-provisioner/pod.yaml: (2.04187289s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-311365 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a1426146-ff4a-4824-9714-0fe892fb6f58] Pending
helpers_test.go:344: "sp-pod" [a1426146-ff4a-4824-9714-0fe892fb6f58] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a1426146-ff4a-4824-9714-0fe892fb6f58] Running
2024/11/05 18:03:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003951725s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-311365 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.09s)
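The claim the test applies can be reconstructed from the last-applied-configuration captured in the retry message above; a minimal equivalent of testdata/storage-provisioner/pvc.yaml (the real fixture may differ in detail):

    kubectl --context functional-311365 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
      volumeMode: Filesystem
    EOF
    # The k8s.io/minikube-hostpath provisioner moves the claim from Pending to Bound; that transition is what the retry above waits for
    kubectl --context functional-311365 get pvc myclaim -o jsonpath='{.status.phase}'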

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh -n functional-311365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cp functional-311365:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1813867448/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh -n functional-311365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh -n functional-311365 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)
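The three cp directions exercised above map onto the CLI like this (paths taken from the logged commands; /tmp/cp-test.txt stands in for the test's temp directory):

    out/minikube-linux-amd64 -p functional-311365 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> guest
    out/minikube-linux-amd64 -p functional-311365 cp functional-311365:/home/docker/cp-test.txt /tmp/cp-test.txt  # guest -> host
    out/minikube-linux-amd64 -p functional-311365 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt        # missing guest directories are created
    out/minikube-linux-amd64 -p functional-311365 ssh -n functional-311365 "sudo cat /home/docker/cp-test.txt"   # verify contents inside the VM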

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-311365 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-grdwd" [674eb0a3-a3f0-43e2-bfa5-248128659b21] Pending
helpers_test.go:344: "mysql-6cdb49bbb-grdwd" [674eb0a3-a3f0-43e2-bfa5-248128659b21] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-grdwd" [674eb0a3-a3f0-43e2-bfa5-248128659b21] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004245536s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-311365 exec mysql-6cdb49bbb-grdwd -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-311365 exec mysql-6cdb49bbb-grdwd -- mysql -ppassword -e "show databases;": exit status 1 (130.15161ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1105 18:03:08.876822   15492 retry.go:31] will retry after 601.141521ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-311365 exec mysql-6cdb49bbb-grdwd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.06s)
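The ERROR 2002 above only means mysqld inside the pod was still initializing when the first exec ran; the pod reports Running before the server accepts socket connections, so a short client-side retry (what retry.go does in the harness) is enough. A sketch:

    POD=$(kubectl --context functional-311365 get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    # Retry until mysqld accepts connections; the harness needed one retry after ~600ms
    until kubectl --context functional-311365 exec "$POD" -- mysql -ppassword -e "show databases;"; do
      sleep 2
    done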

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/15492/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo cat /etc/test/nested/copy/15492/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/15492.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo cat /etc/ssl/certs/15492.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/15492.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo cat /usr/share/ca-certificates/15492.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/154922.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo cat /etc/ssl/certs/154922.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/154922.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo cat /usr/share/ca-certificates/154922.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.50s)
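The two sets of paths checked here are the same synced certificates under two names: the per-run file name (15492.pem / 154922.pem, derived from the test process id) and an OpenSSL subject-hash name (51391683.0 / 3ec20f2e.0). A sketch of that relationship, assuming openssl is available inside the guest:

    # The printed hash should be 51391683, i.e. the basename of /etc/ssl/certs/51391683.0
    out/minikube-linux-amd64 -p functional-311365 ssh \
      "sudo openssl x509 -noout -hash -in /etc/ssl/certs/15492.pem"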

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-311365 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 ssh "sudo systemctl is-active docker": exit status 1 (253.299249ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 ssh "sudo systemctl is-active containerd": exit status 1 (256.116486ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
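The "Process exited with status 3" in stderr is systemctl's own convention rather than an ssh failure: is-active prints the unit state and exits non-zero for anything but active, and minikube ssh surfaces that as its own non-zero exit (status 1 in the runs above). A sketch, with crio expected to be the only active runtime on this profile:

    out/minikube-linux-amd64 -p functional-311365 ssh "sudo systemctl is-active docker"      # prints "inactive", exits non-zero
    out/minikube-linux-amd64 -p functional-311365 ssh "sudo systemctl is-active containerd"  # same
    out/minikube-linux-amd64 -p functional-311365 ssh "sudo systemctl is-active crio"        # should print "active" and exit 0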

                                                
                                    
x
+
TestFunctional/parallel/License (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-311365 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-311365 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-77pwl" [fa2fc2de-51cc-4be7-bfd3-14ada4b6852e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-77pwl" [fa2fc2de-51cc-4be7-bfd3-14ada4b6852e] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004075866s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "300.565605ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.787977ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdany-port1230835827/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730829752153415531" to /tmp/TestFunctionalparallelMountCmdany-port1230835827/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730829752153415531" to /tmp/TestFunctionalparallelMountCmdany-port1230835827/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730829752153415531" to /tmp/TestFunctionalparallelMountCmdany-port1230835827/001/test-1730829752153415531
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.629821ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1105 18:02:32.396341   15492 retry.go:31] will retry after 259.384559ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  5 18:02 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  5 18:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  5 18:02 test-1730829752153415531
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh cat /mount-9p/test-1730829752153415531
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-311365 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2a9ad33e-4970-4dba-a31e-000bb5e764b7] Pending
helpers_test.go:344: "busybox-mount" [2a9ad33e-4970-4dba-a31e-000bb5e764b7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2a9ad33e-4970-4dba-a31e-000bb5e764b7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2a9ad33e-4970-4dba-a31e-000bb5e764b7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004661179s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-311365 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdany-port1230835827/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.40s)
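The same 9p mount cycle by hand (the host directory below is a placeholder for the test's temp dir; the mount command stays in the background for the life of the mount):

    mkdir -p /tmp/mount-src
    out/minikube-linux-amd64 mount -p functional-311365 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    # The first findmnt can race the 9p server coming up, which is what the single retry above shows
    out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-311365 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-311365 ssh "sudo umount -f /mount-9p"
    kill $MOUNT_PID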

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "324.823163ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "59.00417ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311365 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-311365
localhost/kicbase/echo-server:functional-311365
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311365 image ls --format short --alsologtostderr:
I1105 18:02:51.735613   26584 out.go:345] Setting OutFile to fd 1 ...
I1105 18:02:51.735721   26584 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:02:51.735731   26584 out.go:358] Setting ErrFile to fd 2...
I1105 18:02:51.735735   26584 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:02:51.735932   26584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
I1105 18:02:51.736472   26584 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:02:51.736562   26584 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:02:51.736947   26584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:02:51.736981   26584 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:02:51.752203   26584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
I1105 18:02:51.752657   26584 main.go:141] libmachine: () Calling .GetVersion
I1105 18:02:51.753308   26584 main.go:141] libmachine: Using API Version  1
I1105 18:02:51.753340   26584 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:02:51.753681   26584 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:02:51.753863   26584 main.go:141] libmachine: (functional-311365) Calling .GetState
I1105 18:02:51.755622   26584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:02:51.755660   26584 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:02:51.770186   26584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
I1105 18:02:51.770616   26584 main.go:141] libmachine: () Calling .GetVersion
I1105 18:02:51.771073   26584 main.go:141] libmachine: Using API Version  1
I1105 18:02:51.771098   26584 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:02:51.771411   26584 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:02:51.771583   26584 main.go:141] libmachine: (functional-311365) Calling .DriverName
I1105 18:02:51.771819   26584 ssh_runner.go:195] Run: systemctl --version
I1105 18:02:51.771849   26584 main.go:141] libmachine: (functional-311365) Calling .GetSSHHostname
I1105 18:02:51.774503   26584 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:02:51.774912   26584 main.go:141] libmachine: (functional-311365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:5a:4c", ip: ""} in network mk-functional-311365: {Iface:virbr1 ExpiryTime:2024-11-05 18:54:06 +0000 UTC Type:0 Mac:52:54:00:c4:5a:4c Iaid: IPaddr:192.168.50.14 Prefix:24 Hostname:functional-311365 Clientid:01:52:54:00:c4:5a:4c}
I1105 18:02:51.774946   26584 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined IP address 192.168.50.14 and MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:02:51.775128   26584 main.go:141] libmachine: (functional-311365) Calling .GetSSHPort
I1105 18:02:51.775299   26584 main.go:141] libmachine: (functional-311365) Calling .GetSSHKeyPath
I1105 18:02:51.775436   26584 main.go:141] libmachine: (functional-311365) Calling .GetSSHUsername
I1105 18:02:51.775543   26584 sshutil.go:53] new ssh client: &{IP:192.168.50.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/functional-311365/id_rsa Username:docker}
I1105 18:02:51.902388   26584 ssh_runner.go:195] Run: sudo crictl images --output json
I1105 18:02:52.310406   26584 main.go:141] libmachine: Making call to close driver server
I1105 18:02:52.310421   26584 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:02:52.310721   26584 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:02:52.310752   26584 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 18:02:52.310756   26584 main.go:141] libmachine: (functional-311365) DBG | Closing plugin on server side
I1105 18:02:52.310765   26584 main.go:141] libmachine: Making call to close driver server
I1105 18:02:52.310774   26584 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:02:52.312261   26584 main.go:141] libmachine: (functional-311365) DBG | Closing plugin on server side
I1105 18:02:52.312294   26584 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:02:52.312311   26584 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.63s)
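The list formats exercised in this group read the same data; the stderr above shows each invocation sshing into the node and running "sudo crictl images --output json", with only the rendering differing:

    out/minikube-linux-amd64 -p functional-311365 image ls --format short   # repo:tag lines, as above
    out/minikube-linux-amd64 -p functional-311365 image ls --format table   # the bordered table in the next test
    out/minikube-linux-amd64 -p functional-311365 image ls --format json    # repoDigests and sizes, machine-readable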

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311365 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| localhost/kicbase/echo-server           | functional-311365  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-311365  | 24802ef4dd6bc | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-311365  | 9e5b71f8a3047 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311365 image ls --format table --alsologtostderr:
I1105 18:03:02.973480   26837 out.go:345] Setting OutFile to fd 1 ...
I1105 18:03:02.973580   26837 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:03:02.973588   26837 out.go:358] Setting ErrFile to fd 2...
I1105 18:03:02.973592   26837 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:03:02.973776   26837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
I1105 18:03:02.974302   26837 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:03:02.974401   26837 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:03:02.974752   26837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:03:02.974786   26837 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:03:02.989162   26837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33409
I1105 18:03:02.989665   26837 main.go:141] libmachine: () Calling .GetVersion
I1105 18:03:02.990220   26837 main.go:141] libmachine: Using API Version  1
I1105 18:03:02.990243   26837 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:03:02.990532   26837 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:03:02.990719   26837 main.go:141] libmachine: (functional-311365) Calling .GetState
I1105 18:03:02.992449   26837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:03:02.992501   26837 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:03:03.007050   26837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
I1105 18:03:03.007538   26837 main.go:141] libmachine: () Calling .GetVersion
I1105 18:03:03.008044   26837 main.go:141] libmachine: Using API Version  1
I1105 18:03:03.008067   26837 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:03:03.008405   26837 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:03:03.008618   26837 main.go:141] libmachine: (functional-311365) Calling .DriverName
I1105 18:03:03.008811   26837 ssh_runner.go:195] Run: systemctl --version
I1105 18:03:03.008835   26837 main.go:141] libmachine: (functional-311365) Calling .GetSSHHostname
I1105 18:03:03.011318   26837 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:03:03.011756   26837 main.go:141] libmachine: (functional-311365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:5a:4c", ip: ""} in network mk-functional-311365: {Iface:virbr1 ExpiryTime:2024-11-05 18:54:06 +0000 UTC Type:0 Mac:52:54:00:c4:5a:4c Iaid: IPaddr:192.168.50.14 Prefix:24 Hostname:functional-311365 Clientid:01:52:54:00:c4:5a:4c}
I1105 18:03:03.011815   26837 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined IP address 192.168.50.14 and MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:03:03.011934   26837 main.go:141] libmachine: (functional-311365) Calling .GetSSHPort
I1105 18:03:03.012086   26837 main.go:141] libmachine: (functional-311365) Calling .GetSSHKeyPath
I1105 18:03:03.012234   26837 main.go:141] libmachine: (functional-311365) Calling .GetSSHUsername
I1105 18:03:03.012367   26837 sshutil.go:53] new ssh client: &{IP:192.168.50.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/functional-311365/id_rsa Username:docker}
I1105 18:03:03.089511   26837 ssh_runner.go:195] Run: sudo crictl images --output json
I1105 18:03:03.124984   26837 main.go:141] libmachine: Making call to close driver server
I1105 18:03:03.124999   26837 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:03:03.125265   26837 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:03:03.125298   26837 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 18:03:03.125296   26837 main.go:141] libmachine: (functional-311365) DBG | Closing plugin on server side
I1105 18:03:03.125312   26837 main.go:141] libmachine: Making call to close driver server
I1105 18:03:03.125323   26837 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:03:03.125537   26837 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:03:03.125548   26837 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311365 image ls --format json --alsologtostderr:
[{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"1ec4282a2f535946a3dba32c4265df22f92bb4bde1202736cff382cda983d8f8","repoDigests":["docker.io/library/a9d93dc8d3bd4a3465edebd03c1e44f9b9cd7340081025c09156d65789622b44-tmp@sha256:a7330fe9f9dba00005c3d188dcdef9af3004ecfa984fe1165a401a98a9cc83d8"],"repoTags":[],"size":"1466018"},{"id":"9e5b71f8a30475bfb6154bb681912e44c5b9f44446349532b1076a13f5ee8899","repoDigests":[
"localhost/my-image@sha256:be3765bfb74055f557ce516d41bfa6a9f8c990710f30cae5a0541c6bc4a321b9"],"repoTags":["localhost/my-image:functional-311365"],"size":"1468600"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1
b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","doc
ker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-311365"],"size":"4943877"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","r
egistry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"24802ef4dd6bc6bc3a1dbe597a97e22837ce7024318963338a252cd2b2e7c70b","repoDigests":["localhost/minikube-local-cache-test@sha256:ed1c6c8435a9ab3dc330d2b2a369b900d34c591e473371c6fb0c4647e3a611e9"],"repoTags":["localhost/minikube-local-cache-test:functional-311365"],"size":"3330"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker
.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f
33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311365 image ls --format json --alsologtostderr:
I1105 18:03:02.739033   26813 out.go:345] Setting OutFile to fd 1 ...
I1105 18:03:02.739156   26813 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:03:02.739166   26813 out.go:358] Setting ErrFile to fd 2...
I1105 18:03:02.739172   26813 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:03:02.739454   26813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
I1105 18:03:02.740244   26813 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:03:02.740390   26813 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:03:02.740964   26813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:03:02.741019   26813 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:03:02.756905   26813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
I1105 18:03:02.757411   26813 main.go:141] libmachine: () Calling .GetVersion
I1105 18:03:02.758060   26813 main.go:141] libmachine: Using API Version  1
I1105 18:03:02.758094   26813 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:03:02.758402   26813 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:03:02.758604   26813 main.go:141] libmachine: (functional-311365) Calling .GetState
I1105 18:03:02.760508   26813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:03:02.760545   26813 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:03:02.775165   26813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
I1105 18:03:02.775494   26813 main.go:141] libmachine: () Calling .GetVersion
I1105 18:03:02.775923   26813 main.go:141] libmachine: Using API Version  1
I1105 18:03:02.775944   26813 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:03:02.776255   26813 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:03:02.776424   26813 main.go:141] libmachine: (functional-311365) Calling .DriverName
I1105 18:03:02.776599   26813 ssh_runner.go:195] Run: systemctl --version
I1105 18:03:02.776620   26813 main.go:141] libmachine: (functional-311365) Calling .GetSSHHostname
I1105 18:03:02.779269   26813 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:03:02.779684   26813 main.go:141] libmachine: (functional-311365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:5a:4c", ip: ""} in network mk-functional-311365: {Iface:virbr1 ExpiryTime:2024-11-05 18:54:06 +0000 UTC Type:0 Mac:52:54:00:c4:5a:4c Iaid: IPaddr:192.168.50.14 Prefix:24 Hostname:functional-311365 Clientid:01:52:54:00:c4:5a:4c}
I1105 18:03:02.779713   26813 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined IP address 192.168.50.14 and MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:03:02.779822   26813 main.go:141] libmachine: (functional-311365) Calling .GetSSHPort
I1105 18:03:02.779997   26813 main.go:141] libmachine: (functional-311365) Calling .GetSSHKeyPath
I1105 18:03:02.780163   26813 main.go:141] libmachine: (functional-311365) Calling .GetSSHUsername
I1105 18:03:02.780324   26813 sshutil.go:53] new ssh client: &{IP:192.168.50.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/functional-311365/id_rsa Username:docker}
I1105 18:03:02.865766   26813 ssh_runner.go:195] Run: sudo crictl images --output json
I1105 18:03:02.922627   26813 main.go:141] libmachine: Making call to close driver server
I1105 18:03:02.922653   26813 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:03:02.922932   26813 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:03:02.922946   26813 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 18:03:02.922982   26813 main.go:141] libmachine: Making call to close driver server
I1105 18:03:02.922992   26813 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:03:02.923226   26813 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:03:02.923248   26813 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
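For reference, the JSON printed by "image ls --format json" above is a flat array of objects with the fields id, repoDigests, repoTags, and size (note size is emitted as a string, not a number). The following is a minimal decoding sketch in Go; it is illustrative only and not part of the minikube test suite — the struct name listedImage, the program layout, and the 12-character ID truncation are choices made here for the example, not anything defined by minikube.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// listedImage mirrors one element of the array shown in the stdout above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Pipe the command output in, e.g.:
	//   out/minikube-linux-amd64 -p functional-311365 image ls --format json | ./decode-images
	var images []listedImage
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%s  %s  %s bytes\n", img.ID[:12], tag, img.Size)
	}
}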

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311365 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-311365
size: "4943877"
- id: 24802ef4dd6bc6bc3a1dbe597a97e22837ce7024318963338a252cd2b2e7c70b
repoDigests:
- localhost/minikube-local-cache-test@sha256:ed1c6c8435a9ab3dc330d2b2a369b900d34c591e473371c6fb0c4647e3a611e9
repoTags:
- localhost/minikube-local-cache-test:functional-311365
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311365 image ls --format yaml --alsologtostderr:
I1105 18:02:52.383092   26616 out.go:345] Setting OutFile to fd 1 ...
I1105 18:02:52.383285   26616 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:02:52.383314   26616 out.go:358] Setting ErrFile to fd 2...
I1105 18:02:52.383331   26616 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:02:52.383676   26616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
I1105 18:02:52.384610   26616 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:02:52.384796   26616 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:02:52.385418   26616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:02:52.385507   26616 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:02:52.404191   26616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41889
I1105 18:02:52.404968   26616 main.go:141] libmachine: () Calling .GetVersion
I1105 18:02:52.405588   26616 main.go:141] libmachine: Using API Version  1
I1105 18:02:52.405645   26616 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:02:52.406179   26616 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:02:52.406379   26616 main.go:141] libmachine: (functional-311365) Calling .GetState
I1105 18:02:52.408772   26616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:02:52.408824   26616 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:02:52.426577   26616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
I1105 18:02:52.427094   26616 main.go:141] libmachine: () Calling .GetVersion
I1105 18:02:52.427686   26616 main.go:141] libmachine: Using API Version  1
I1105 18:02:52.427711   26616 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:02:52.428011   26616 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:02:52.428207   26616 main.go:141] libmachine: (functional-311365) Calling .DriverName
I1105 18:02:52.428430   26616 ssh_runner.go:195] Run: systemctl --version
I1105 18:02:52.428461   26616 main.go:141] libmachine: (functional-311365) Calling .GetSSHHostname
I1105 18:02:52.431525   26616 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:02:52.431846   26616 main.go:141] libmachine: (functional-311365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:5a:4c", ip: ""} in network mk-functional-311365: {Iface:virbr1 ExpiryTime:2024-11-05 18:54:06 +0000 UTC Type:0 Mac:52:54:00:c4:5a:4c Iaid: IPaddr:192.168.50.14 Prefix:24 Hostname:functional-311365 Clientid:01:52:54:00:c4:5a:4c}
I1105 18:02:52.431865   26616 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined IP address 192.168.50.14 and MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:02:52.432000   26616 main.go:141] libmachine: (functional-311365) Calling .GetSSHPort
I1105 18:02:52.432149   26616 main.go:141] libmachine: (functional-311365) Calling .GetSSHKeyPath
I1105 18:02:52.432255   26616 main.go:141] libmachine: (functional-311365) Calling .GetSSHUsername
I1105 18:02:52.432383   26616 sshutil.go:53] new ssh client: &{IP:192.168.50.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/functional-311365/id_rsa Username:docker}
I1105 18:02:52.545404   26616 ssh_runner.go:195] Run: sudo crictl images --output json
I1105 18:02:52.799681   26616 main.go:141] libmachine: Making call to close driver server
I1105 18:02:52.799693   26616 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:02:52.799974   26616 main.go:141] libmachine: (functional-311365) DBG | Closing plugin on server side
I1105 18:02:52.800025   26616 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:02:52.800037   26616 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 18:02:52.800046   26616 main.go:141] libmachine: Making call to close driver server
I1105 18:02:52.800057   26616 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:02:52.800290   26616 main.go:141] libmachine: (functional-311365) DBG | Closing plugin on server side
I1105 18:02:52.800340   26616 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:02:52.800361   26616 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (9.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 ssh pgrep buildkitd: exit status 1 (248.352439ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image build -t localhost/my-image:functional-311365 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 image build -t localhost/my-image:functional-311365 testdata/build --alsologtostderr: (9.213181284s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-311365 image build -t localhost/my-image:functional-311365 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1ec4282a2f5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-311365
--> 9e5b71f8a30
Successfully tagged localhost/my-image:functional-311365
9e5b71f8a30475bfb6154bb681912e44c5b9f44446349532b1076a13f5ee8899
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-311365 image build -t localhost/my-image:functional-311365 testdata/build --alsologtostderr:
I1105 18:02:53.281954   26702 out.go:345] Setting OutFile to fd 1 ...
I1105 18:02:53.282079   26702 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:02:53.282086   26702 out.go:358] Setting ErrFile to fd 2...
I1105 18:02:53.282090   26702 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:02:53.282279   26702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
I1105 18:02:53.282842   26702 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:02:53.283413   26702 config.go:182] Loaded profile config "functional-311365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:02:53.283802   26702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:02:53.283838   26702 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:02:53.300472   26702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38891
I1105 18:02:53.301000   26702 main.go:141] libmachine: () Calling .GetVersion
I1105 18:02:53.301684   26702 main.go:141] libmachine: Using API Version  1
I1105 18:02:53.301715   26702 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:02:53.302129   26702 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:02:53.302330   26702 main.go:141] libmachine: (functional-311365) Calling .GetState
I1105 18:02:53.304249   26702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1105 18:02:53.304292   26702 main.go:141] libmachine: Launching plugin server for driver kvm2
I1105 18:02:53.319410   26702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35229
I1105 18:02:53.319907   26702 main.go:141] libmachine: () Calling .GetVersion
I1105 18:02:53.320444   26702 main.go:141] libmachine: Using API Version  1
I1105 18:02:53.320472   26702 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 18:02:53.320777   26702 main.go:141] libmachine: () Calling .GetMachineName
I1105 18:02:53.320942   26702 main.go:141] libmachine: (functional-311365) Calling .DriverName
I1105 18:02:53.321099   26702 ssh_runner.go:195] Run: systemctl --version
I1105 18:02:53.321133   26702 main.go:141] libmachine: (functional-311365) Calling .GetSSHHostname
I1105 18:02:53.323967   26702 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:02:53.324329   26702 main.go:141] libmachine: (functional-311365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:5a:4c", ip: ""} in network mk-functional-311365: {Iface:virbr1 ExpiryTime:2024-11-05 18:54:06 +0000 UTC Type:0 Mac:52:54:00:c4:5a:4c Iaid: IPaddr:192.168.50.14 Prefix:24 Hostname:functional-311365 Clientid:01:52:54:00:c4:5a:4c}
I1105 18:02:53.324370   26702 main.go:141] libmachine: (functional-311365) DBG | domain functional-311365 has defined IP address 192.168.50.14 and MAC address 52:54:00:c4:5a:4c in network mk-functional-311365
I1105 18:02:53.324503   26702 main.go:141] libmachine: (functional-311365) Calling .GetSSHPort
I1105 18:02:53.324671   26702 main.go:141] libmachine: (functional-311365) Calling .GetSSHKeyPath
I1105 18:02:53.324819   26702 main.go:141] libmachine: (functional-311365) Calling .GetSSHUsername
I1105 18:02:53.324946   26702 sshutil.go:53] new ssh client: &{IP:192.168.50.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/functional-311365/id_rsa Username:docker}
I1105 18:02:53.439390   26702 build_images.go:161] Building image from path: /tmp/build.1069784096.tar
I1105 18:02:53.439465   26702 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1105 18:02:53.450189   26702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1069784096.tar
I1105 18:02:53.457438   26702 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1069784096.tar: stat -c "%s %y" /var/lib/minikube/build/build.1069784096.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1069784096.tar': No such file or directory
I1105 18:02:53.457481   26702 ssh_runner.go:362] scp /tmp/build.1069784096.tar --> /var/lib/minikube/build/build.1069784096.tar (3072 bytes)
I1105 18:02:53.481097   26702 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1069784096
I1105 18:02:53.490514   26702 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1069784096 -xf /var/lib/minikube/build/build.1069784096.tar
I1105 18:02:53.499268   26702 crio.go:315] Building image: /var/lib/minikube/build/build.1069784096
I1105 18:02:53.499339   26702 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-311365 /var/lib/minikube/build/build.1069784096 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1105 18:03:02.420212   26702 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-311365 /var/lib/minikube/build/build.1069784096 --cgroup-manager=cgroupfs: (8.920845004s)
I1105 18:03:02.420273   26702 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1069784096
I1105 18:03:02.431707   26702 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1069784096.tar
I1105 18:03:02.443699   26702 build_images.go:217] Built localhost/my-image:functional-311365 from /tmp/build.1069784096.tar
I1105 18:03:02.443737   26702 build_images.go:133] succeeded building to: functional-311365
I1105 18:03:02.443743   26702 build_images.go:134] failed building to: 
I1105 18:03:02.443765   26702 main.go:141] libmachine: Making call to close driver server
I1105 18:03:02.443774   26702 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:03:02.444030   26702 main.go:141] libmachine: (functional-311365) DBG | Closing plugin on server side
I1105 18:03:02.444047   26702 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:03:02.444061   26702 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 18:03:02.444081   26702 main.go:141] libmachine: Making call to close driver server
I1105 18:03:02.444090   26702 main.go:141] libmachine: (functional-311365) Calling .Close
I1105 18:03:02.444310   26702 main.go:141] libmachine: (functional-311365) DBG | Closing plugin on server side
I1105 18:03:02.444315   26702 main.go:141] libmachine: Successfully made call to close driver server
I1105 18:03:02.444331   26702 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.70s)
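The podman build steps above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply that the testdata/build context is a small directory holding a content.txt file plus a three-line Dockerfile roughly as reconstructed below. This sketch is inferred from the STEP lines in this run's output only, not quoted from the minikube repository:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /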

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.711436561s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-311365
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image load --daemon kicbase/echo-server:functional-311365 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-311365 image load --daemon kicbase/echo-server:functional-311365 --alsologtostderr: (1.123782853s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image load --daemon kicbase/echo-server:functional-311365 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-311365
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image load --daemon kicbase/echo-server:functional-311365 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image save kicbase/echo-server:functional-311365 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image rm kicbase/echo-server:functional-311365 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-311365
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 image save --daemon kicbase/echo-server:functional-311365 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-311365
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdspecific-port71386928/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.045858ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1105 18:02:41.761663   15492 retry.go:31] will retry after 619.323937ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdspecific-port71386928/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-311365 ssh "sudo umount -f /mount-9p": exit status 1 (210.618157ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-311365 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdspecific-port71386928/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 service list -o json
functional_test.go:1494: Took "258.62687ms" to run "out/minikube-linux-amd64 -p functional-311365 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.50.14:31509
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590957802/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590957802/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590957802/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-311365 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590957802/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590957802/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-311365 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590957802/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.50.14:31509
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
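As a small consumer-side sketch, the Go program below issues an HTTP GET against the endpoint that "service hello-node --url" printed above. The literal URL 192.168.50.14:31509 is specific to this particular run, and the assumption that hello-node answers plain HTTP GET requests is not stated by the test output, so treat this purely as an illustration rather than part of the test suite.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Endpoint reported by the test above; it will differ between runs and environments.
	url := "http://192.168.50.14:31509"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}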

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-311365 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-311365
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-311365
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-311365
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (194.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-844661 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1105 18:04:06.921391   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-844661 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.747465518s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-844661 -- rollout status deployment/busybox: (4.247656414s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-lzhpc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-mwvv2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-vkchm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-lzhpc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-mwvv2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-vkchm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-lzhpc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-mwvv2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-vkchm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-lzhpc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-lzhpc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-mwvv2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-mwvv2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-vkchm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844661 -- exec busybox-7dff88458-vkchm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-844661 -v=7 --alsologtostderr
E1105 18:07:31.419539   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:31.425951   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:31.437300   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:31.458682   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:31.500069   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:31.581595   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:31.743032   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:32.064662   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:32.706208   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:33.987984   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:36.550294   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-844661 -v=7 --alsologtostderr: (56.778977525s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-844661 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status --output json -v=7 --alsologtostderr
E1105 18:07:41.671669   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp testdata/cp-test.txt ha-844661:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661:/home/docker/cp-test.txt ha-844661-m02:/home/docker/cp-test_ha-844661_ha-844661-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m02 "sudo cat /home/docker/cp-test_ha-844661_ha-844661-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661:/home/docker/cp-test.txt ha-844661-m03:/home/docker/cp-test_ha-844661_ha-844661-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m03 "sudo cat /home/docker/cp-test_ha-844661_ha-844661-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661:/home/docker/cp-test.txt ha-844661-m04:/home/docker/cp-test_ha-844661_ha-844661-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m04 "sudo cat /home/docker/cp-test_ha-844661_ha-844661-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp testdata/cp-test.txt ha-844661-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m02:/home/docker/cp-test.txt ha-844661:/home/docker/cp-test_ha-844661-m02_ha-844661.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test_ha-844661-m02_ha-844661.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m02:/home/docker/cp-test.txt ha-844661-m03:/home/docker/cp-test_ha-844661-m02_ha-844661-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m03 "sudo cat /home/docker/cp-test_ha-844661-m02_ha-844661-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m02:/home/docker/cp-test.txt ha-844661-m04:/home/docker/cp-test_ha-844661-m02_ha-844661-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m04 "sudo cat /home/docker/cp-test_ha-844661-m02_ha-844661-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp testdata/cp-test.txt ha-844661-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt ha-844661:/home/docker/cp-test_ha-844661-m03_ha-844661.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test_ha-844661-m03_ha-844661.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt ha-844661-m02:/home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m02 "sudo cat /home/docker/cp-test_ha-844661-m03_ha-844661-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m03:/home/docker/cp-test.txt ha-844661-m04:/home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m04 "sudo cat /home/docker/cp-test_ha-844661-m03_ha-844661-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp testdata/cp-test.txt ha-844661-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1581451422/001/cp-test_ha-844661-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt ha-844661:/home/docker/cp-test_ha-844661-m04_ha-844661.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661 "sudo cat /home/docker/cp-test_ha-844661-m04_ha-844661.txt"
E1105 18:07:51.914059   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt ha-844661-m02:/home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m02 "sudo cat /home/docker/cp-test_ha-844661-m04_ha-844661-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 cp ha-844661-m04:/home/docker/cp-test.txt ha-844661-m03:/home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 ssh -n ha-844661-m03 "sudo cat /home/docker/cp-test_ha-844661-m04_ha-844661-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.60s)
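Note: each CopyFile step above pairs a `minikube cp` with an `ssh -n <node> "sudo cat ..."` read-back. The stand-alone Go sketch below is hypothetical (it is not the helpers_test.go code) and simply reproduces that round-trip for a single node, assuming the out/minikube-linux-amd64 binary and the ha-844661 profile from this run are available locally.

// copyfile_sketch.go - hypothetical illustration of the cp/ssh verification
// pattern shown in the CopyFile log lines above; not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64"
	profile := "ha-844661"

	// Copy a local file onto the node, mirroring the `minikube cp` call in the log.
	cp := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt", "ha-844661:/home/docker/cp-test.txt")
	if out, err := cp.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}

	// Read the file back over SSH, mirroring the `ssh -n ... sudo cat` verification step.
	cat := exec.Command(bin, "-p", profile, "ssh", "-n", "ha-844661", "sudo cat /home/docker/cp-test.txt")
	out, err := cat.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("ssh cat failed: %v\n%s", err, out))
	}
	fmt.Printf("copied file readable on node, %d bytes\n", len(out))
}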

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-844661 node delete m03 -v=7 --alsologtostderr: (15.899073094s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (351.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-844661 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1105 18:22:31.422853   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:23:54.485028   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:24:06.921764   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-844661 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m50.966532981s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (351.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-844661 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-844661 --control-plane -v=7 --alsologtostderr: (1m15.906122975s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-844661 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.72s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

                                                
                                    
TestJSONOutput/start/Command (51.22s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-437343 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1105 18:27:09.996589   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:27:31.419020   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-437343 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (51.216358015s)
--- PASS: TestJSONOutput/start/Command (51.22s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-437343 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-437343 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.6s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-437343 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-437343 --output=json --user=testUser: (6.603194023s)
--- PASS: TestJSONOutput/stop/Command (6.60s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-457530 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-457530 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.299212ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b67e20fb-3bb6-48fa-bb73-e248d7f13345","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-457530] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1fb6543-4f34-44b7-93fc-1beab4376442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19910"}}
	{"specversion":"1.0","id":"e7e82e3c-aafc-463d-acce-b7e1cdd0d32e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"34bb2c83-86ed-400f-8701-fe3df7540659","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig"}}
	{"specversion":"1.0","id":"a9290e30-57c3-4aef-8c6a-91fbb4b354cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube"}}
	{"specversion":"1.0","id":"fd1815ea-d482-412c-a624-78506151ce72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f733b346-2212-4a6d-a972-95415191d6a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a9ac3ca-d805-4c90-a844-d33cce5a8334","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-457530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-457530
--- PASS: TestErrorJSONOutput (0.20s)
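Note: the --output=json stream in the stdout block above is a sequence of CloudEvents-style JSON lines. The minimal Go sketch below is hypothetical (not part of the test suite); it decodes the error event shown above and extracts the exit code and message, with field names taken directly from those lines.

// errorjson_sketch.go - hypothetical decoder for one line of the
// `minikube start --output=json` stream captured in the stdout block above.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The io.k8s.sigs.minikube.error event emitted by the failed `--driver=fail` run above.
	line := `{"specversion":"1.0","id":"1a9ac3ca-d805-4c90-a844-d33cce5a8334","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("type=%s exitcode=%s message=%q\n", ev.Type, ev.Data["exitcode"], ev.Data["message"])
}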

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (83.82s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-935164 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-935164 --driver=kvm2  --container-runtime=crio: (42.98865898s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-945026 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-945026 --driver=kvm2  --container-runtime=crio: (38.000467357s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-935164
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
E1105 18:29:06.920741   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-945026
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-945026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-945026
helpers_test.go:175: Cleaning up "first-935164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-935164
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-935164: (1.021624967s)
--- PASS: TestMinikubeProfile (83.82s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-009567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-009567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.644602356s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.64s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-009567 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-009567 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-024202 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-024202 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.045530378s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.05s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024202 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024202 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-009567 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024202 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024202 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-024202
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-024202: (1.276987842s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.96s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-024202
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-024202: (21.958863548s)
--- PASS: TestMountStart/serial/RestartStopped (22.96s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024202 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024202 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501442 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-501442 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.676480535s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.07s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-501442 -- rollout status deployment/busybox: (4.177966898s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-gdxjv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-l6lsk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-gdxjv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-l6lsk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-gdxjv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-l6lsk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.59s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-gdxjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-gdxjv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-l6lsk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501442 -- exec busybox-7dff88458-l6lsk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
TestMultiNode/serial/AddNode (47.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-501442 -v 3 --alsologtostderr
E1105 18:32:31.418941   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-501442 -v 3 --alsologtostderr: (46.78387844s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.35s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-501442 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp testdata/cp-test.txt multinode-501442:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3513316962/001/cp-test_multinode-501442.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442:/home/docker/cp-test.txt multinode-501442-m02:/home/docker/cp-test_multinode-501442_multinode-501442-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m02 "sudo cat /home/docker/cp-test_multinode-501442_multinode-501442-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442:/home/docker/cp-test.txt multinode-501442-m03:/home/docker/cp-test_multinode-501442_multinode-501442-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m03 "sudo cat /home/docker/cp-test_multinode-501442_multinode-501442-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp testdata/cp-test.txt multinode-501442-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3513316962/001/cp-test_multinode-501442-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442-m02:/home/docker/cp-test.txt multinode-501442:/home/docker/cp-test_multinode-501442-m02_multinode-501442.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442 "sudo cat /home/docker/cp-test_multinode-501442-m02_multinode-501442.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442-m02:/home/docker/cp-test.txt multinode-501442-m03:/home/docker/cp-test_multinode-501442-m02_multinode-501442-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m03 "sudo cat /home/docker/cp-test_multinode-501442-m02_multinode-501442-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp testdata/cp-test.txt multinode-501442-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3513316962/001/cp-test_multinode-501442-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt multinode-501442:/home/docker/cp-test_multinode-501442-m03_multinode-501442.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442 "sudo cat /home/docker/cp-test_multinode-501442-m03_multinode-501442.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 cp multinode-501442-m03:/home/docker/cp-test.txt multinode-501442-m02:/home/docker/cp-test_multinode-501442-m03_multinode-501442-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 ssh -n multinode-501442-m02 "sudo cat /home/docker/cp-test_multinode-501442-m03_multinode-501442-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.19s)

                                                
                                    
TestMultiNode/serial/StopNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-501442 node stop m03: (1.356217811s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-501442 status: exit status 7 (417.205511ms)

                                                
                                                
-- stdout --
	multinode-501442
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-501442-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-501442-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr: exit status 7 (410.980488ms)

                                                
                                                
-- stdout --
	multinode-501442
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-501442-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-501442-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:33:22.832699   44064 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:33:22.832950   44064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:33:22.832959   44064 out.go:358] Setting ErrFile to fd 2...
	I1105 18:33:22.832963   44064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:33:22.833124   44064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:33:22.833266   44064 out.go:352] Setting JSON to false
	I1105 18:33:22.833286   44064 mustload.go:65] Loading cluster: multinode-501442
	I1105 18:33:22.833311   44064 notify.go:220] Checking for updates...
	I1105 18:33:22.833681   44064 config.go:182] Loaded profile config "multinode-501442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:33:22.833701   44064 status.go:174] checking status of multinode-501442 ...
	I1105 18:33:22.834105   44064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:33:22.834161   44064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:33:22.849935   44064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I1105 18:33:22.850394   44064 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:33:22.850908   44064 main.go:141] libmachine: Using API Version  1
	I1105 18:33:22.850932   44064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:33:22.851284   44064 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:33:22.851481   44064 main.go:141] libmachine: (multinode-501442) Calling .GetState
	I1105 18:33:22.852961   44064 status.go:371] multinode-501442 host status = "Running" (err=<nil>)
	I1105 18:33:22.852978   44064 host.go:66] Checking if "multinode-501442" exists ...
	I1105 18:33:22.853283   44064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:33:22.853331   44064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:33:22.867993   44064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I1105 18:33:22.868389   44064 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:33:22.868839   44064 main.go:141] libmachine: Using API Version  1
	I1105 18:33:22.868858   44064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:33:22.869141   44064 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:33:22.869293   44064 main.go:141] libmachine: (multinode-501442) Calling .GetIP
	I1105 18:33:22.871662   44064 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:33:22.872051   44064 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:33:22.872082   44064 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:33:22.872282   44064 host.go:66] Checking if "multinode-501442" exists ...
	I1105 18:33:22.872574   44064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:33:22.872615   44064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:33:22.887437   44064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I1105 18:33:22.887848   44064 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:33:22.888271   44064 main.go:141] libmachine: Using API Version  1
	I1105 18:33:22.888293   44064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:33:22.888566   44064 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:33:22.888755   44064 main.go:141] libmachine: (multinode-501442) Calling .DriverName
	I1105 18:33:22.888893   44064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:33:22.888929   44064 main.go:141] libmachine: (multinode-501442) Calling .GetSSHHostname
	I1105 18:33:22.891933   44064 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:33:22.892355   44064 main.go:141] libmachine: (multinode-501442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:73:56", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:30:42 +0000 UTC Type:0 Mac:52:54:00:6c:73:56 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-501442 Clientid:01:52:54:00:6c:73:56}
	I1105 18:33:22.892379   44064 main.go:141] libmachine: (multinode-501442) DBG | domain multinode-501442 has defined IP address 192.168.39.235 and MAC address 52:54:00:6c:73:56 in network mk-multinode-501442
	I1105 18:33:22.892503   44064 main.go:141] libmachine: (multinode-501442) Calling .GetSSHPort
	I1105 18:33:22.892672   44064 main.go:141] libmachine: (multinode-501442) Calling .GetSSHKeyPath
	I1105 18:33:22.892821   44064 main.go:141] libmachine: (multinode-501442) Calling .GetSSHUsername
	I1105 18:33:22.892966   44064 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442/id_rsa Username:docker}
	I1105 18:33:22.973599   44064 ssh_runner.go:195] Run: systemctl --version
	I1105 18:33:22.979326   44064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:33:22.992810   44064 kubeconfig.go:125] found "multinode-501442" server: "https://192.168.39.235:8443"
	I1105 18:33:22.992838   44064 api_server.go:166] Checking apiserver status ...
	I1105 18:33:22.992879   44064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:33:23.005656   44064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1073/cgroup
	W1105 18:33:23.014889   44064 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1073/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 18:33:23.014962   44064 ssh_runner.go:195] Run: ls
	I1105 18:33:23.019406   44064 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1105 18:33:23.023427   44064 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I1105 18:33:23.023447   44064 status.go:463] multinode-501442 apiserver status = Running (err=<nil>)
	I1105 18:33:23.023455   44064 status.go:176] multinode-501442 status: &{Name:multinode-501442 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:33:23.023471   44064 status.go:174] checking status of multinode-501442-m02 ...
	I1105 18:33:23.023803   44064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:33:23.023840   44064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:33:23.038960   44064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
	I1105 18:33:23.039475   44064 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:33:23.039956   44064 main.go:141] libmachine: Using API Version  1
	I1105 18:33:23.039978   44064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:33:23.040284   44064 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:33:23.040453   44064 main.go:141] libmachine: (multinode-501442-m02) Calling .GetState
	I1105 18:33:23.041894   44064 status.go:371] multinode-501442-m02 host status = "Running" (err=<nil>)
	I1105 18:33:23.041908   44064 host.go:66] Checking if "multinode-501442-m02" exists ...
	I1105 18:33:23.042193   44064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:33:23.042236   44064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:33:23.057207   44064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I1105 18:33:23.057659   44064 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:33:23.058181   44064 main.go:141] libmachine: Using API Version  1
	I1105 18:33:23.058240   44064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:33:23.058538   44064 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:33:23.058723   44064 main.go:141] libmachine: (multinode-501442-m02) Calling .GetIP
	I1105 18:33:23.061510   44064 main.go:141] libmachine: (multinode-501442-m02) DBG | domain multinode-501442-m02 has defined MAC address 52:54:00:5c:7a:a5 in network mk-multinode-501442
	I1105 18:33:23.062008   44064 main.go:141] libmachine: (multinode-501442-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:7a:a5", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:31:46 +0000 UTC Type:0 Mac:52:54:00:5c:7a:a5 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-501442-m02 Clientid:01:52:54:00:5c:7a:a5}
	I1105 18:33:23.062044   44064 main.go:141] libmachine: (multinode-501442-m02) DBG | domain multinode-501442-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:5c:7a:a5 in network mk-multinode-501442
	I1105 18:33:23.062161   44064 host.go:66] Checking if "multinode-501442-m02" exists ...
	I1105 18:33:23.062464   44064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:33:23.062499   44064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:33:23.077898   44064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37499
	I1105 18:33:23.078363   44064 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:33:23.078835   44064 main.go:141] libmachine: Using API Version  1
	I1105 18:33:23.078867   44064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:33:23.079240   44064 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:33:23.079404   44064 main.go:141] libmachine: (multinode-501442-m02) Calling .DriverName
	I1105 18:33:23.079578   44064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:33:23.079600   44064 main.go:141] libmachine: (multinode-501442-m02) Calling .GetSSHHostname
	I1105 18:33:23.082147   44064 main.go:141] libmachine: (multinode-501442-m02) DBG | domain multinode-501442-m02 has defined MAC address 52:54:00:5c:7a:a5 in network mk-multinode-501442
	I1105 18:33:23.082527   44064 main.go:141] libmachine: (multinode-501442-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:7a:a5", ip: ""} in network mk-multinode-501442: {Iface:virbr1 ExpiryTime:2024-11-05 19:31:46 +0000 UTC Type:0 Mac:52:54:00:5c:7a:a5 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-501442-m02 Clientid:01:52:54:00:5c:7a:a5}
	I1105 18:33:23.082552   44064 main.go:141] libmachine: (multinode-501442-m02) DBG | domain multinode-501442-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:5c:7a:a5 in network mk-multinode-501442
	I1105 18:33:23.082711   44064 main.go:141] libmachine: (multinode-501442-m02) Calling .GetSSHPort
	I1105 18:33:23.082869   44064 main.go:141] libmachine: (multinode-501442-m02) Calling .GetSSHKeyPath
	I1105 18:33:23.083029   44064 main.go:141] libmachine: (multinode-501442-m02) Calling .GetSSHUsername
	I1105 18:33:23.083152   44064 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19910-8296/.minikube/machines/multinode-501442-m02/id_rsa Username:docker}
	I1105 18:33:23.165690   44064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:33:23.179150   44064 status.go:176] multinode-501442-m02 status: &{Name:multinode-501442-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:33:23.179183   44064 status.go:174] checking status of multinode-501442-m03 ...
	I1105 18:33:23.179513   44064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1105 18:33:23.179557   44064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1105 18:33:23.194728   44064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I1105 18:33:23.195201   44064 main.go:141] libmachine: () Calling .GetVersion
	I1105 18:33:23.195691   44064 main.go:141] libmachine: Using API Version  1
	I1105 18:33:23.195711   44064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 18:33:23.196044   44064 main.go:141] libmachine: () Calling .GetMachineName
	I1105 18:33:23.196210   44064 main.go:141] libmachine: (multinode-501442-m03) Calling .GetState
	I1105 18:33:23.197848   44064 status.go:371] multinode-501442-m03 host status = "Stopped" (err=<nil>)
	I1105 18:33:23.197863   44064 status.go:384] host is not running, skipping remaining checks
	I1105 18:33:23.197870   44064 status.go:176] multinode-501442-m03 status: &{Name:multinode-501442-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
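For reference, the per-node probe logged above can be reproduced by hand. A minimal sketch, assuming the profile and node names from this run and that this minikube build accepts the ssh --node / -n flag (the flag itself does not appear in the log):

    # Overall cluster view, exactly as the test runs it
    out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr
    # Kubelet check on the m02 worker (exit 0 = active, 3 = inactive)
    out/minikube-linux-amd64 ssh -p multinode-501442 -n multinode-501442-m02 "sudo systemctl is-active kubelet"; echo "exit: $?"
    # Disk-usage probe that status runs over SSH on every node
    out/minikube-linux-amd64 ssh -p multinode-501442 -n multinode-501442-m02 "df -h /var | awk 'NR==2{print \$5}'"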

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (37.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-501442 node start m03 -v=7 --alsologtostderr: (37.25603509s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.88s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-501442 node delete m03: (1.465446651s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.99s)
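The quoting on the last kubectl line above comes from the harness wrapping the whole go-template in an extra pair of single quotes; written directly, the readiness check is easier to read, and (assuming kubectl wait is an acceptable substitute, which the test itself does not use) has a simpler equivalent:

    # Print the Ready condition status of every remaining node
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # Same intent, blocking until all nodes report Ready
    kubectl wait --for=condition=Ready node --all --timeout=60s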

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (177.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501442 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1105 18:42:31.422703   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:43:49.998761   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:06.921620   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-501442 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m56.838986046s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501442 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (177.37s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (41.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-501442
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501442-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-501442-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.99751ms)

                                                
                                                
-- stdout --
	* [multinode-501442-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-501442-m02' is duplicated with machine name 'multinode-501442-m02' in profile 'multinode-501442'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501442-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-501442-m03 --driver=kvm2  --container-runtime=crio: (39.763058061s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-501442
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-501442: exit status 80 (214.639077ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-501442 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-501442-m03 already exists in multinode-501442-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-501442-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.08s)
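The two non-zero exits above are the expected guardrails: a new profile name may not collide with an existing profile or with a machine name inside a multi-node profile, and node add refuses a name that already exists as a standalone profile. A short sketch, where multinode-extra is a hypothetical non-conflicting name:

    # See which profile and machine names are already taken
    out/minikube-linux-amd64 profile list
    # Start under a name not used as a profile or node/machine name anywhere
    out/minikube-linux-amd64 start -p multinode-extra --driver=kvm2 --container-runtime=crio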

                                                
                                    
x
+
TestScheduledStopUnix (112.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-956232 --memory=2048 --driver=kvm2  --container-runtime=crio
E1105 18:49:06.921517   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-956232 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.671210036s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-956232 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-956232 -n scheduled-stop-956232
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-956232 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1105 18:49:31.728766   15492 retry.go:31] will retry after 104.47µs: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.729975   15492 retry.go:31] will retry after 213.043µs: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.731109   15492 retry.go:31] will retry after 303.911µs: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.732248   15492 retry.go:31] will retry after 397.044µs: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.733370   15492 retry.go:31] will retry after 692.051µs: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.734496   15492 retry.go:31] will retry after 418.624µs: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.735619   15492 retry.go:31] will retry after 1.330323ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.737828   15492 retry.go:31] will retry after 1.217684ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.740019   15492 retry.go:31] will retry after 1.418634ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.742215   15492 retry.go:31] will retry after 4.546573ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.747418   15492 retry.go:31] will retry after 6.453411ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.754638   15492 retry.go:31] will retry after 5.858129ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.760932   15492 retry.go:31] will retry after 17.715278ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.779174   15492 retry.go:31] will retry after 27.279919ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
I1105 18:49:31.807433   15492 retry.go:31] will retry after 17.678951ms: open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/scheduled-stop-956232/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-956232 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-956232 -n scheduled-stop-956232
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-956232
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-956232 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-956232
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-956232: exit status 7 (64.398637ms)

                                                
                                                
-- stdout --
	scheduled-stop-956232
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-956232 -n scheduled-stop-956232
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-956232 -n scheduled-stop-956232: exit status 7 (62.700929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-956232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-956232
--- PASS: TestScheduledStopUnix (112.22s)
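Taken together, the run above exercises the whole scheduled-stop workflow; the commands, in the order the test issues them, are:

    # Schedule a stop 5 minutes out, then inspect the countdown
    out/minikube-linux-amd64 stop -p scheduled-stop-956232 --schedule 5m
    out/minikube-linux-amd64 status -p scheduled-stop-956232 --format='{{.TimeToStop}}'
    # Replace the schedule with a shorter one, or cancel it outright
    out/minikube-linux-amd64 stop -p scheduled-stop-956232 --schedule 15s
    out/minikube-linux-amd64 stop -p scheduled-stop-956232 --cancel-scheduled
    # After a scheduled stop fires, status reports Stopped and exits 7
    out/minikube-linux-amd64 status -p scheduled-stop-956232; echo "exit: $?"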

                                                
                                    
x
+
TestRunningBinaryUpgrade (210.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1449180788 start -p running-upgrade-481717 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1449180788 start -p running-upgrade-481717 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m46.028966449s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-481717 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-481717 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m41.404237471s)
helpers_test.go:175: Cleaning up "running-upgrade-481717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-481717
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-481717: (1.174311109s)
--- PASS: TestRunningBinaryUpgrade (210.93s)
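The upgrade path under test is simply two start invocations against the same profile: the old release creates it, and the current binary re-runs start while the cluster is still running. The /tmp/minikube-v1.26.0.* path is the temporary copy of the old release that the test fetches:

    # Old release creates and starts the profile
    /tmp/minikube-v1.26.0.1449180788 start -p running-upgrade-481717 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # Current binary upgrades the same, still-running profile in place
    out/minikube-linux-amd64 start -p running-upgrade-481717 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    # Cleanup
    out/minikube-linux-amd64 delete -p running-upgrade-481717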

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-048420 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-048420 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (91.291799ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-048420] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
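--no-kubernetes and --kubernetes-version are mutually exclusive, which is why the start above exits with status 14 before creating anything. Following the hint in the error output:

    # Clear any globally configured version, then start without Kubernetes
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-048420 --no-kubernetes --driver=kvm2 --container-runtime=crio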

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (109.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-048420 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-048420 --driver=kvm2  --container-runtime=crio: (1m49.55156297s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-048420 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (109.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-929548 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-929548 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (782.244916ms)

                                                
                                                
-- stdout --
	* [false-929548] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:51:52.344956   52579 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:51:52.345205   52579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:51:52.345215   52579 out.go:358] Setting ErrFile to fd 2...
	I1105 18:51:52.345219   52579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:51:52.345455   52579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-8296/.minikube/bin
	I1105 18:51:52.346077   52579 out.go:352] Setting JSON to false
	I1105 18:51:52.347069   52579 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5654,"bootTime":1730827058,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 18:51:52.347125   52579 start.go:139] virtualization: kvm guest
	I1105 18:51:52.349505   52579 out.go:177] * [false-929548] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 18:51:52.350865   52579 notify.go:220] Checking for updates...
	I1105 18:51:52.350896   52579 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:51:52.352095   52579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:51:52.353410   52579 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-8296/kubeconfig
	I1105 18:51:52.354998   52579 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-8296/.minikube
	I1105 18:51:52.356365   52579 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 18:51:52.357546   52579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:51:52.359411   52579 config.go:182] Loaded profile config "NoKubernetes-048420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:51:52.359545   52579 config.go:182] Loaded profile config "cert-expiration-099467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:51:52.359690   52579 config.go:182] Loaded profile config "offline-crio-019255": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:51:52.359772   52579 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:51:53.070770   52579 out.go:177] * Using the kvm2 driver based on user configuration
	I1105 18:51:53.072078   52579 start.go:297] selected driver: kvm2
	I1105 18:51:53.072098   52579 start.go:901] validating driver "kvm2" against <nil>
	I1105 18:51:53.072113   52579 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:51:53.074357   52579 out.go:201] 
	W1105 18:51:53.075715   52579 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1105 18:51:53.077031   52579 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-929548 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-929548" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:51:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.74:8443
  name: cert-expiration-099467
contexts:
- context:
    cluster: cert-expiration-099467
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:51:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-099467
  name: cert-expiration-099467
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-099467
  user:
    client-certificate: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/cert-expiration-099467/client.crt
    client-key: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/cert-expiration-099467/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-929548

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-929548"

                                                
                                                
----------------------- debugLogs end: false-929548 [took: 3.335484413s] --------------------------------
helpers_test.go:175: Cleaning up "false-929548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-929548
--- PASS: TestNetworkPlugins/group/false (4.27s)
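This group is expected to fail fast: cri-o ships no built-in pod networking, so --cni=false is rejected during flag validation and no VM is ever created (the debugLogs noise above is just the harness probing a profile that never existed). A minimal sketch, assuming one of minikube's named CNIs and a hypothetical profile name:

    # Rejected up front with MK_USAGE: the "crio" container runtime requires CNI (exit status 14)
    out/minikube-linux-amd64 start -p false-929548 --cni=false --driver=kvm2 --container-runtime=crio
    # A concrete CNI selection (e.g. bridge, kindnet, calico, cilium, flannel) is accepted
    out/minikube-linux-amd64 start -p cni-sketch --cni=bridge --driver=kvm2 --container-runtime=crio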

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-048420 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-048420 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.879804153s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-048420 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-048420 status -o json: exit status 2 (234.834755ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-048420","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-048420
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.93s)
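Re-running start --no-kubernetes on an existing profile leaves the host running while kubelet and the apiserver stay stopped, which is why status exits 2 above. The JSON form is convenient to inspect; jq on the host is an assumption here:

    out/minikube-linux-amd64 -p NoKubernetes-048420 status -o json | jq '{Host, Kubelet, APIServer, Kubeconfig}'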

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-048420 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-048420 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.403964838s)
--- PASS: TestNoKubernetes/serial/Start (28.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-048420 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-048420 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.339126ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
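The exit status 1 here wraps the inner "ssh: Process exited with status 3": systemctl is-active exits 3 for an inactive unit, which is exactly what a no-Kubernetes node should report. Dropping --quiet (a small change from the test's command) makes the state visible:

    # Prints the unit state (e.g. "inactive") and exits 3 when kubelet is not running
    out/minikube-linux-amd64 ssh -p NoKubernetes-048420 "sudo systemctl is-active kubelet"; echo "exit: $?"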

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-048420
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-048420: (2.408642868s)
--- PASS: TestNoKubernetes/serial/Stop (2.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (59.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-048420 --driver=kvm2  --container-runtime=crio
E1105 18:54:06.921022   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-048420 --driver=kvm2  --container-runtime=crio: (59.784807619s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (59.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-048420 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-048420 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.544585ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (110.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.226195456 start -p stopped-upgrade-026921 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.226195456 start -p stopped-upgrade-026921 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m3.156013368s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.226195456 -p stopped-upgrade-026921 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.226195456 -p stopped-upgrade-026921 stop: (1.433704395s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-026921 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-026921 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.065840975s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.66s)

                                                
                                    
x
+
TestPause/serial/Start (80.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-616842 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-616842 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m20.799852246s)
--- PASS: TestPause/serial/Start (80.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (83.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m23.728425386s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.73s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-026921
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (80.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m20.921441004s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-929548 "pgrep -a kubelet"
I1105 18:57:54.800434   15492 config.go:182] Loaded profile config "auto-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-929548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qwhqt" [5afc51c8-a148-423f-81c9-7a861cb0199e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qwhqt" [5afc51c8-a148-423f-81c9-7a861cb0199e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004150765s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.22s)
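The NetCatPod step just (re)deploys the dnsutils-based netcat deployment from testdata and waits for it to become Ready; the same wait can be done directly with kubectl wait, an equivalent the harness itself does not use:

    kubectl --context auto-929548 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-929548 wait --for=condition=Ready pod -l app=netcat --timeout=15m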

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dwmqp" [1a2526fe-b7e9-4dcc-85dc-4f8eeb325ead] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004946187s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-929548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-929548 "pgrep -a kubelet"
I1105 18:58:07.225916   15492 config.go:182] Loaded profile config "kindnet-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-929548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l7zrf" [410b3110-3879-44bb-a017-94406407fcf9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l7zrf" [410b3110-3879-44bb-a017-94406407fcf9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004386031s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-929548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m28.999704573s)
--- PASS: TestNetworkPlugins/group/calico/Start (89.00s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (103.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m43.449164832s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (103.45s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (137.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m17.783030841s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (137.78s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tr2nf" [5c1de0c0-d990-4a6d-83cd-18b0d0cd9b83] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004169362s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-929548 "pgrep -a kubelet"
I1105 18:59:56.740074   15492 config.go:182] Loaded profile config "calico-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-929548 replace --force -f testdata/netcat-deployment.yaml
I1105 18:59:57.602876   15492 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dwxrm" [b454a845-b7eb-413f-8b69-696ba406c01a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dwxrm" [b454a845-b7eb-413f-8b69-696ba406c01a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005084476s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.91s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-929548 "pgrep -a kubelet"
I1105 19:00:05.475173   15492 config.go:182] Loaded profile config "custom-flannel-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-929548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-shqvv" [26b5fed7-0d8d-45da-8dbe-71e7dd48c24e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-shqvv" [26b5fed7-0d8d-45da-8dbe-71e7dd48c24e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00426761s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-929548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-929548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (72.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m12.286329783s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (114.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1105 19:00:30.000105   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/addons-320753/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-929548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m54.769550863s)
--- PASS: TestNetworkPlugins/group/bridge/Start (114.77s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-929548 "pgrep -a kubelet"
I1105 19:00:51.792839   15492 config.go:182] Loaded profile config "enable-default-cni-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-929548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rdshx" [f5bebca8-2e24-49b7-b47e-d13f924d31dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rdshx" [f5bebca8-2e24-49b7-b47e-d13f924d31dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004189186s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-929548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (103.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-459223 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-459223 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m43.551286919s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.55s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kxp59" [009d0675-4399-47b4-a33e-0fb11cebae87] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003771663s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-929548 "pgrep -a kubelet"
I1105 19:01:43.665014   15492 config.go:182] Loaded profile config "flannel-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-929548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pqm9n" [ac8c0373-902c-40ee-b158-976ba32ccd8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pqm9n" [ac8c0373-902c-40ee-b158-976ba32ccd8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005503004s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-929548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (59.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-271881 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-271881 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (59.313440537s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-929548 "pgrep -a kubelet"
I1105 19:02:21.668544   15492 config.go:182] Loaded profile config "bridge-929548": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-929548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k2nw8" [712d61f7-da8b-4aba-bbcb-253891988f2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-k2nw8" [712d61f7-da8b-4aba-bbcb-253891988f2f] Running
E1105 19:02:31.418609   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003845554s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-929548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-929548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-608095 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1105 19:02:55.009207   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:55.015631   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:55.027078   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:55.048522   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:55.090058   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:55.171497   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:55.333566   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:55.655745   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:56.297943   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:02:57.579620   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:00.140936   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:01.007195   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:01.013622   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:01.025109   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:01.047032   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:01.088257   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:01.169700   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:01.331056   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:01.652515   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-608095 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m29.361444382s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-459223 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [41e0fdf3-739a-40b0-b6d3-166f4e3ef507] Pending
E1105 19:03:02.294760   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [41e0fdf3-739a-40b0-b6d3-166f4e3ef507] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1105 19:03:03.576904   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:05.263244   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:03:06.138729   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [41e0fdf3-739a-40b0-b6d3-166f4e3ef507] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004460766s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-459223 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-271881 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ceb3a79-8d8f-46a1-92ae-9d7ef5256c68] Pending
E1105 19:03:11.260935   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [0ceb3a79-8d8f-46a1-92ae-9d7ef5256c68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ceb3a79-8d8f-46a1-92ae-9d7ef5256c68] Running
E1105 19:03:15.505620   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/auto-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004684269s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-271881 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-459223 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-459223 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-271881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-271881 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-608095 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [60cb45e2-148c-4641-8049-e602f75d631a] Pending
helpers_test.go:344: "busybox" [60cb45e2-148c-4641-8049-e602f75d631a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1105 19:04:22.946939   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/kindnet-929548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [60cb45e2-148c-4641-8049-e602f75d631a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003872931s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-608095 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-608095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-608095 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (676.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-459223 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1105 19:05:46.672498   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-459223 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (11m15.761763386s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-459223 -n no-preload-459223
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (676.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (611.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-271881 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1105 19:05:53.367732   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:54.649834   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:05:57.211262   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:02.332605   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:12.203726   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:12.574545   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:27.633930   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:06:33.056527   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-271881 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m11.35574636s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271881 -n embed-certs-271881
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (611.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (515.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-608095 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1105 19:07:14.019581   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:18.438857   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:21.924300   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:21.930703   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:21.942138   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:21.963484   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:22.004905   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:22.086845   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:22.248330   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:22.570093   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:23.212171   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:24.493737   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:27.055321   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:31.418786   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/functional-311365/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:32.177539   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:34.126243   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/calico-929548/client.crt: no such file or directory" logger="UnhandledError"
E1105 19:07:42.419824   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/bridge-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-608095 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (8m34.978920155s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-608095 -n default-k8s-diff-port-608095
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (515.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-567666 --alsologtostderr -v=3
E1105 19:07:49.556204   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/custom-flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-567666 --alsologtostderr -v=3: (6.291211413s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567666 -n old-k8s-version-567666: exit status 7 (62.609625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-567666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-886087 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1105 19:30:52.081079   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/enable-default-cni-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-886087 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (47.232188955s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-886087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-886087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069189297s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-886087 --alsologtostderr -v=3
E1105 19:31:37.462259   15492 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/flannel-929548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-886087 --alsologtostderr -v=3: (10.422765394s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-886087 -n newest-cni-886087
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-886087 -n newest-cni-886087: exit status 7 (64.932772ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-886087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-886087 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-886087 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (35.682463673s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-886087 -n newest-cni-886087
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-886087 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-886087 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-886087 --alsologtostderr -v=1: (1.919531884s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-886087 -n newest-cni-886087
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-886087 -n newest-cni-886087: exit status 2 (292.430485ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-886087 -n newest-cni-886087
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-886087 -n newest-cni-886087: exit status 2 (257.913166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-886087 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-886087 --alsologtostderr -v=1: (1.000741438s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-886087 -n newest-cni-886087
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-886087 -n newest-cni-886087
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.25s)
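Note (not part of the test run): the pause sequence above treats certain non-zero exits from minikube status as expected ("may be ok"). A minimal sketch of the same check done by hand, assuming the newest-cni-886087 profile; exit code 2 with "Paused" and exit code 7 with "Stopped" are the cases logged in this report, and 0 is the usual fully-running case:

    minikube pause -p newest-cni-886087
    minikube status --format='{{.APIServer}}' -p newest-cni-886087   # prints Paused, exits 2
    minikube unpause -p newest-cni-886087
    minikube status --format='{{.APIServer}}' -p newest-cni-886087   # prints Running, exits 0 when healthy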

                                                
                                    

Test skip (39/314)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 2.94
265 TestNetworkPlugins/group/cilium 5.87
280 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-320753 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-929548 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-929548" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:51:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.74:8443
  name: cert-expiration-099467
contexts:
- context:
    cluster: cert-expiration-099467
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:51:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-099467
  name: cert-expiration-099467
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-099467
  user:
    client-certificate: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/cert-expiration-099467/client.crt
    client-key: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/cert-expiration-099467/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-929548

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-929548"

                                                
                                                
----------------------- debugLogs end: kubenet-929548 [took: 2.789887206s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-929548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-929548
--- SKIP: TestNetworkPlugins/group/kubenet (2.94s)
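Note (not part of the test run): the repeated "context was not found" and "does not exist" lines in the kubenet-929548 debug log above come from the debug harness querying a profile that was never started; the kubectl config dump shows only a cert-expiration-099467 context and an empty current-context. A minimal sketch of verifying that by hand against the same kubeconfig:

    kubectl config get-contexts                          # lists cert-expiration-099467; kubenet-929548 is absent
    kubectl config current-context                       # errors out while current-context is ""
    kubectl --context cert-expiration-099467 get nodes   # targets the one context that does exist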

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-929548 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-929548

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-929548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-929548" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-929548" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-929548" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: kubelet daemon config:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> k8s: kubelet logs:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19910-8296/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:51:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.74:8443
  name: cert-expiration-099467
contexts:
- context:
    cluster: cert-expiration-099467
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:51:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-099467
  name: cert-expiration-099467
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-099467
  user:
    client-certificate: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/cert-expiration-099467/client.crt
    client-key: /home/jenkins/minikube-integration/19910-8296/.minikube/profiles/cert-expiration-099467/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-929548

>>> host: docker daemon status:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: docker daemon config:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: docker system info:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: cri-docker daemon status:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: cri-docker daemon config:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: cri-dockerd version:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: containerd daemon status:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: containerd daemon config:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: containerd config dump:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: crio daemon status:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: crio daemon config:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: /etc/crio:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

>>> host: crio config:
* Profile "cilium-929548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-929548"

----------------------- debugLogs end: cilium-929548 [took: 5.703938691s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-929548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-929548
--- SKIP: TestNetworkPlugins/group/cilium (5.87s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-537175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-537175
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)